Feb 20 00:08:33 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 20 00:08:34 crc kubenswrapper[5107]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.143928    5107 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151254    5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151286    5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151295    5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151303    5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151312    5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151323    5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151332    5107 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151340    5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151348    5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151356    5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151364    5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151372    5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151379    5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151387    5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151394    5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151402    5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151410    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151417    5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151424    5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151431    5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151438    5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151445    5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151455    5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151464    5107 feature_gate.go:328] unrecognized feature gate: Example
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151472    5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151479    5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151487    5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151494    5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151508    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151515    5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151523    5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151530    5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151537    5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151544    5107 feature_gate.go:328] unrecognized feature gate: Example2
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151551    5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151559    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151566    5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151574    5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151582    5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151590    5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151597    5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151604    5107 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151611    5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151619    5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151626    5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151633    5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151640    5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151648    5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151655    5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151665    5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151673    5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151682    5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151691    5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151699    5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151707    5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151714    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151721    5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151728    5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151736    5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151743    5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151750    5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151757    5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151764    5107 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151772    5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151779    5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151786    5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151793    5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151799    5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151807    5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151815    5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151822    5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151831    5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151838    5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151845    5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151852    5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151860    5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151869    5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151877    5107 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151884    5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151891    5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151898    5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151905    5107 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151912    5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151918    5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151926    5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.151932    5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152886    5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152900    5107 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152908    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152917    5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152924    5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152932    5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152939    5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152947    5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152955    5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152962    5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152969    5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152977    5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152984    5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152991    5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.152998    5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153006    5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153012    5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153022    5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153029    5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153036    5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153043    5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153051    5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153058    5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153065    5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153073    5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153082    5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153089    5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153096    5107 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153104    5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153112    5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153119    5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153127    5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153134    5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153187    5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153196    5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153203    5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153210    5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153217    5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153224    5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153231    5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153238    5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153245    5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153252    5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153259    5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153266    5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153273    5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153280    5107 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153293    5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153299    5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153309    5107 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153316    5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153323    5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153330    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153337    5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153345    5107 feature_gate.go:328] unrecognized feature gate: Example
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153354    5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153365    5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153374    5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153382    5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153390    5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153398    5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153406    5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153413    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153421    5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153428    5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153435    5107 feature_gate.go:328] unrecognized feature gate: Example2
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153443    5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153449    5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153456    5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153463    5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153470    5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153478    5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153487    5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153497    5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153505    5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153514    5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153523    5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153532    5107 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153541    5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153554    5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153564    5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153575    5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153584    5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153596    5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153605    5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.153613    5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153784    5107 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153802    5107 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153870    5107 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153880    5107 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153890    5107 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153898    5107 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153909    5107 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153920    5107 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153928    5107 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153936    5107 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153945    5107 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153953    5107 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153961    5107 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153969    5107 flags.go:64] FLAG: --cgroup-root=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153977    5107 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153984    5107 flags.go:64] FLAG: --client-ca-file=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.153993    5107 flags.go:64] FLAG: --cloud-config=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154000    5107 flags.go:64] FLAG: --cloud-provider=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154007    5107 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154018    5107 flags.go:64] FLAG: --cluster-domain=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154026    5107 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154034    5107 flags.go:64] FLAG: --config-dir=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154041    5107 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154050    5107 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154060    5107 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154073    5107 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154081    5107 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154089    5107 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154097    5107 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154105    5107 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154113    5107 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154121    5107 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154130    5107 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154169    5107 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154181    5107 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154189    5107 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154197    5107 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154205    5107 flags.go:64] FLAG: --enable-server="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154213    5107 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154222    5107 flags.go:64] FLAG: --event-burst="100"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154230    5107 flags.go:64] FLAG: --event-qps="50"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154238    5107 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154247    5107 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154254    5107 flags.go:64] FLAG: --eviction-hard=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154264    5107 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154271    5107 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154281    5107 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154290    5107 flags.go:64] FLAG: --eviction-soft=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154298    5107 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154307    5107 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154315    5107 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154323    5107 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154331    5107 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154340    5107 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154347    5107 flags.go:64] FLAG: --feature-gates=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154357    5107 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154365    5107 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154378    5107 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154386    5107 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154394    5107 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154402    5107 flags.go:64] FLAG: --help="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154409    5107 flags.go:64] FLAG: --hostname-override=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154417    5107 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154425    5107 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154433    5107 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154440    5107 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154448    5107 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154457    5107 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154465    5107 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154474    5107 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154481    5107 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154489    5107 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154497    5107 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154505    5107 flags.go:64] FLAG: --kube-reserved=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154513    5107 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154521    5107 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154529    5107 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154536    5107 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154544    5107 flags.go:64] FLAG: --lock-file=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154552    5107 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154561    5107 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154569    5107 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154580    5107 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154588    5107 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154596    5107 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154604    5107 flags.go:64] FLAG: --logging-format="text"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154611    5107 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154620    5107 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154632    5107 flags.go:64] FLAG: --manifest-url=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154642    5107 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154879    5107 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154887    5107 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154897    5107 flags.go:64] FLAG: --max-pods="110"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154905    5107 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154913    5107 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154921    5107 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154929    5107 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154937    5107 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154944    5107 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154952    5107 flags.go:64] FLAG:
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154974 5107 flags.go:64] FLAG: --node-status-max-images="50" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154983 5107 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154991 5107 flags.go:64] FLAG: --oom-score-adj="-999" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.154998 5107 flags.go:64] FLAG: --pod-cidr="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155007 5107 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155020 5107 flags.go:64] FLAG: --pod-manifest-path="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155027 5107 flags.go:64] FLAG: --pod-max-pids="-1" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155036 5107 flags.go:64] FLAG: --pods-per-core="0" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155044 5107 flags.go:64] FLAG: --port="10250" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155052 5107 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155060 5107 flags.go:64] FLAG: --provider-id="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155068 5107 flags.go:64] FLAG: --qos-reserved="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155076 5107 flags.go:64] FLAG: --read-only-port="10255" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155084 5107 flags.go:64] FLAG: --register-node="true" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155092 5107 flags.go:64] FLAG: --register-schedulable="true" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155100 5107 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155113 5107 flags.go:64] FLAG: --registry-burst="10" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155121 5107 flags.go:64] FLAG: --registry-qps="5" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155129 5107 flags.go:64] FLAG: --reserved-cpus="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155137 5107 flags.go:64] FLAG: --reserved-memory="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155175 5107 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155183 5107 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155197 5107 flags.go:64] FLAG: --rotate-certificates="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155205 5107 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155213 5107 flags.go:64] FLAG: --runonce="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155221 5107 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155229 5107 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155237 5107 flags.go:64] FLAG: --seccomp-default="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155245 5107 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155253 5107 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155261 5107 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155269 5107 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155277 
5107 flags.go:64] FLAG: --storage-driver-password="root" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155286 5107 flags.go:64] FLAG: --storage-driver-secure="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155293 5107 flags.go:64] FLAG: --storage-driver-table="stats" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155301 5107 flags.go:64] FLAG: --storage-driver-user="root" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155309 5107 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155318 5107 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155325 5107 flags.go:64] FLAG: --system-cgroups="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155333 5107 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155345 5107 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155353 5107 flags.go:64] FLAG: --tls-cert-file="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155361 5107 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155371 5107 flags.go:64] FLAG: --tls-min-version="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155378 5107 flags.go:64] FLAG: --tls-private-key-file="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155386 5107 flags.go:64] FLAG: --topology-manager-policy="none" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155394 5107 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155401 5107 flags.go:64] FLAG: --topology-manager-scope="container" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155409 5107 flags.go:64] FLAG: --v="2" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155419 5107 
flags.go:64] FLAG: --version="false" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155429 5107 flags.go:64] FLAG: --vmodule="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155439 5107 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.155450 5107 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155634 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155649 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155658 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155666 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155675 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155682 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155689 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155696 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155703 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155711 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155718 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:08:34 crc 
kubenswrapper[5107]: W0220 00:08:34.155725 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155732 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155740 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155747 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155754 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155761 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155768 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155775 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155782 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155789 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155796 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155802 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155810 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155816 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 
00:08:34.155824 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155830 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155838 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155844 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155851 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155859 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155872 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155880 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155891 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155899 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155907 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155914 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155921 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155927 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 
00:08:34.155935 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155942 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155949 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155956 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155963 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155970 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155977 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155985 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155993 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.155999 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156006 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156013 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156020 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156028 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156035 5107 feature_gate.go:328] unrecognized feature 
gate: AWSClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156042 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156049 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156056 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156063 5107 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156070 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156077 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156084 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156091 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156098 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156108 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156115 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156124 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156132 5107 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156139 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156171 5107 
feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156179 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156186 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156193 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156201 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156208 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156215 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156222 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156229 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156237 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156243 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156252 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156259 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156266 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156273 5107 
feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156280 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156287 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.156294 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.157509 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.172322 5107 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.172361 5107 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172465 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172478 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172486 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172493 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172501 5107 feature_gate.go:328] unrecognized 
feature gate: BuildCSIVolumes Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172508 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172515 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172523 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172530 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172537 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172545 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172552 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172559 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172566 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172573 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172580 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172587 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172594 5107 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172601 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:08:34 crc 
kubenswrapper[5107]: W0220 00:08:34.172610 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172618 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172625 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172631 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172638 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172645 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172653 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172662 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172672 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172680 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172687 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172695 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172704 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172712 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172719 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172726 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172734 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172742 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172750 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172758 5107 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172765 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172773 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172782 5107 
feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172791 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172799 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172808 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172817 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172827 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172837 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172846 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172855 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172863 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172872 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172881 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172891 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172900 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172910 5107 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172919 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172928 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172937 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172946 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172954 5107 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172963 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172971 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172981 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.172992 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173001 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173010 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173020 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173029 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173036 5107 feature_gate.go:328] unrecognized feature gate: 
VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173046 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173059 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173068 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173076 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173083 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173091 5107 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173098 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173105 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173112 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173119 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173126 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173134 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173180 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173190 
5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173197 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173204 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.173217 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173420 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173433 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173441 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173450 5107 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173458 5107 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173466 5107 feature_gate.go:328] unrecognized feature gate: Example2 Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173473 5107 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173480 5107 feature_gate.go:328] 
unrecognized feature gate: AutomatedEtcdBackup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173488 5107 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173497 5107 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173508 5107 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173519 5107 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173526 5107 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173534 5107 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173541 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173548 5107 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173556 5107 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173564 5107 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173571 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173579 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173586 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173594 5107 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 20 
00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173601 5107 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173610 5107 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173617 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173624 5107 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173631 5107 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173638 5107 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173645 5107 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173653 5107 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173660 5107 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173667 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173675 5107 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173682 5107 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173689 5107 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173696 5107 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 
00:08:34.173703 5107 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173710 5107 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173717 5107 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173724 5107 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173732 5107 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173739 5107 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173746 5107 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173754 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173761 5107 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173769 5107 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173776 5107 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173783 5107 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173790 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173820 5107 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 20 00:08:34 crc 
kubenswrapper[5107]: W0220 00:08:34.173830 5107 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173838 5107 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173845 5107 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173852 5107 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173859 5107 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173866 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173873 5107 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173880 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173887 5107 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173894 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173901 5107 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173909 5107 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173915 5107 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173923 5107 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173931 5107 
feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173938 5107 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173946 5107 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173954 5107 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173961 5107 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173969 5107 feature_gate.go:328] unrecognized feature gate: Example Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173977 5107 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173985 5107 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173992 5107 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.173999 5107 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174006 5107 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174016 5107 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174028 5107 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174039 5107 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174048 5107 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174059 5107 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174068 5107 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174078 5107 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174088 5107 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174097 5107 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174106 5107 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.174115 5107 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.174130 5107 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 20 00:08:34 crc kubenswrapper[5107]: 
I0220 00:08:34.175161 5107 server.go:962] "Client rotation is on, will bootstrap in background" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.182303 5107 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.185926 5107 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.186026 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.187235 5107 server.go:1019] "Starting client certificate rotation" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.187469 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.187626 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.218655 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.222387 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.223502 5107 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.243765 5107 log.go:25] "Validated CRI v1 runtime API" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.302359 5107 log.go:25] "Validated CRI v1 image API" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.304589 5107 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.311869 5107 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-20-00-02-29-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.311903 5107 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.331508 5107 manager.go:217] Machine: {Timestamp:2026-02-20 00:08:34.328445434 +0000 UTC m=+0.697103030 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:3738b857-e068-44b2-8a5a-d59e1fffbda6 BootID:cb8ca53b-411e-4259-ae0d-d078aa1f4c50 Filesystems:[{Device:/tmp 
DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:10:df:fe Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:10:df:fe Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ca:f1:3a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:86:03:89 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c1:5a:b8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fd:35:d4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:97:af:ed:48:aa Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:ea:fa:39:58:d6 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] 
Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.331760 5107 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.331951 5107 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.334291 5107 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.334342 5107 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.334586 5107 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.334598 5107 container_manager_linux.go:306] "Creating device plugin manager" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.334622 5107 manager.go:141] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.335563 5107 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.336674 5107 state_mem.go:36] "Initialized new in-memory state store" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.336870 5107 server.go:1267] "Using root directory" path="/var/lib/kubelet" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.342126 5107 kubelet.go:491] "Attempting to sync node with API server" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.342180 5107 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.342205 5107 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.342226 5107 kubelet.go:397] "Adding apiserver pod source" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.342262 5107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.346683 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.346711 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.353776 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.353952 5107 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.355526 5107 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.355550 5107 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.363329 5107 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.363572 5107 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.364300 5107 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366376 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366403 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366413 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366422 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366431 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366441 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366450 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366459 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366470 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366487 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.366511 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.367044 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.368092 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.368112 5107 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.371223 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.391306 5107 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.391365 5107 server.go:1295] "Started kubelet"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.391626 5107 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.391695 5107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.391795 5107 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.392618 5107 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 20 00:08:34 crc systemd[1]: Started Kubernetes Kubelet.
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.394475 5107 server.go:317] "Adding debug handlers to kubelet server"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.394899 5107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.394974 5107 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.396052 5107 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.396062 5107 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.396136 5107 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.396249 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.398065 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.399403 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.400607 5107 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.400640 5107 factory.go:55] Registering systemd factory
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.400654 5107 factory.go:223] Registration of the systemd container factory successfully
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.397537 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895cbce1384c4c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.391327944 +0000 UTC m=+0.759985520,LastTimestamp:2026-02-20 00:08:34.391327944 +0000 UTC m=+0.759985520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.402371 5107 factory.go:153] Registering CRI-O factory
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.402410 5107 factory.go:223] Registration of the crio container factory successfully
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.402452 5107 factory.go:103] Registering Raw factory
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.402480 5107 manager.go:1196] Started watching for new ooms in manager
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.403679 5107 manager.go:319] Starting recovery of all containers
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.441670 5107 manager.go:324] Recovery completed
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.456537 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.457634 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.457681 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.457692 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.458503 5107 cpu_manager.go:222] "Starting CPU manager" policy="none"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.458552 5107 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.458569 5107 state_mem.go:36] "Initialized new in-memory state store"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.467564 5107 policy_none.go:49] "None policy: Start"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.467583 5107 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.467598 5107 state_mem.go:35] "Initializing new in-memory state store"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.482656 5107 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.482876 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.482972 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.482987 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.482999 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483011 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483023 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483041 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483052 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483065 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483075 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483086 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483097 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483110 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483121 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483134 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483164 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483177 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483187 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483197 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483207 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483219 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483231 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483243 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483254 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483265 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483277 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483287 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483298 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483313 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483324 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483335 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483365 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483381 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483391 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483403 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483415 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483430 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483445 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483460 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483478 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483488 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483500 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483511 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483521 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483533 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483544 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483554 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483565 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483577 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483589 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483600 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483611 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483622 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483632 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483643 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483654 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483670 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483680 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483691 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483700 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483712 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483723 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483733 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483745 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483757 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483767 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483777 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483788 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483798 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483809 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483819 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483830 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483841 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483852 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483863 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483874 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483886 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483898 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483908 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483919 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483932 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483944 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483955 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483967 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483977 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483988 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.483999 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484045 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484056 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484068 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484079 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484090 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484102 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484112 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484122 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484134 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484165 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484177 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484188 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9"
volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484199 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484227 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484240 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484251 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484262 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484272 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" 
seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484284 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484295 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484306 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484318 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484328 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484339 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: 
I0220 00:08:34.484350 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484377 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484386 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484397 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484407 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484419 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484442 5107 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484455 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484466 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484476 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484486 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484498 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484514 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484524 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484536 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484548 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484558 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484572 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484584 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484594 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484605 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484617 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484627 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484639 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484650 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484661 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484671 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484701 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484711 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484723 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484734 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" 
seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484744 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484755 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484769 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484779 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484790 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484797 5107 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484826 5107 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484847 5107 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484856 5107 kubelet.go:2451] "Starting kubelet main sync loop" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.484955 5107 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.484800 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485316 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485377 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485401 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Feb 20 00:08:34 crc 
kubenswrapper[5107]: I0220 00:08:34.485421 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485440 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485463 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485487 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485516 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485544 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485568 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485591 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485618 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485644 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485666 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485691 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485759 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485784 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485809 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.485833 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.488334 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491244 5107 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491276 5107 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491294 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491307 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491321 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491334 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491346 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491359 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491373 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491385 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491398 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491418 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491430 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491445 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491458 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491472 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491493 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491528 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491545 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491559 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491573 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491586 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491620 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491633 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491649 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491661 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491673 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491686 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491698 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491710 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491722 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491734 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491747 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491759 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491771 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491785 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491800 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491816 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491835 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491848 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491860 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491871 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491887 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491899 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491911 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491924 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491936 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491948 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491959 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491970 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491983 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.491995 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492006 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492018 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492029 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492041 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492053 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492066 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492116 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492129 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492172 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492198 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492210 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492221 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492233 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492243 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492257 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492268 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492296 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492308 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492319 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492330 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492341 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492352 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492362 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492373 5107 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492384 5107 reconstruct.go:97] "Volume reconstruction finished"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.492392 5107 reconciler.go:26] "Reconciler: start to sync state"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.496315 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.537915 5107 manager.go:341] "Starting Device Plugin manager"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.538368 5107 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.538428 5107 server.go:85] "Starting device plugin registration server"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.539231 5107 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.539293 5107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.539819 5107 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.539959 5107 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.539976 5107 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.543875 5107 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.543925 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.585065 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.585314 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.586171 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.586208 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.586221 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.586986 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.587266 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.587295 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.587913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.587936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.587949 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588574 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588792 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588818 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.588989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.589740 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.589829 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.589902 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.590048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.590074 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.590086 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.590821 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.591031 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.591103 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.591928 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.591970 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.591984 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.592051 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.592092 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.592109 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.593502 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.593573 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.593617 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.593990 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.594008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.594019 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.594235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.594269 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.594286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.596462 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.596507 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.600603 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.601229 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.601305 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.601791 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.640441 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.641363 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.641458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.641490 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.641543 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.642278 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.642358 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.671735 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.692980 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.693899 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.693967 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694010 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694542 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694595 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694634 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694709 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694814 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694849 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694880 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694927 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.694960 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695004 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\"
(UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695039 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695070 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695102 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695197 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695279 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695310 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695342 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695777 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.695878 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.696408 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.696545 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.696575 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.696673 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.696903 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.721522 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.733287 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.796677 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.796801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.796817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797021 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797058 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797098 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797130 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797203 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797248 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 
20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797261 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797285 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797195 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797294 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797318 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797327 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797345 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797356 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797369 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797387 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797397 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797424 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797448 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797457 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797483 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797493 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797525 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") 
pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797554 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797583 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797597 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797641 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.797681 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 
00:08:34.797774 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.843335 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.844383 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.844450 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.844472 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.844508 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:08:34 crc kubenswrapper[5107]: E0220 00:08:34.845233 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.943573 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.973319 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: I0220 00:08:34.993864 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:34 crc kubenswrapper[5107]: W0220 00:08:34.997060 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-d7c9d2a6c9ccdc53f6e4cc6266d9ee0d6e9d95f36d7c0de853d1bb8ba09e2f54 WatchSource:0}: Error finding container d7c9d2a6c9ccdc53f6e4cc6266d9ee0d6e9d95f36d7c0de853d1bb8ba09e2f54: Status 404 returned error can't find the container with id d7c9d2a6c9ccdc53f6e4cc6266d9ee0d6e9d95f36d7c0de853d1bb8ba09e2f54 Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.002372 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.020850 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.021969 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.034135 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:35 crc kubenswrapper[5107]: W0220 00:08:35.068633 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-f48491b4f525e2e0ab37d67066e4cb32d19ee4deba314f7915813436d442b6da WatchSource:0}: Error finding container f48491b4f525e2e0ab37d67066e4cb32d19ee4deba314f7915813436d442b6da: Status 404 returned error can't find the container with id f48491b4f525e2e0ab37d67066e4cb32d19ee4deba314f7915813436d442b6da Feb 20 00:08:35 crc kubenswrapper[5107]: W0220 00:08:35.082280 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-27fddfedf7d6d954dda0356595f73bbc315b7840cedacad074b426b9d98cd707 WatchSource:0}: Error finding container 27fddfedf7d6d954dda0356595f73bbc315b7840cedacad074b426b9d98cd707: Status 404 returned error can't find the container with id 27fddfedf7d6d954dda0356595f73bbc315b7840cedacad074b426b9d98cd707 Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.246439 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.247679 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.247728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.247738 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.247760 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:08:35 crc 
kubenswrapper[5107]: E0220 00:08:35.248302 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.360414 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.372206 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.470980 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.492591 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"27fddfedf7d6d954dda0356595f73bbc315b7840cedacad074b426b9d98cd707"} Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.493754 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f48491b4f525e2e0ab37d67066e4cb32d19ee4deba314f7915813436d442b6da"} Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.494602 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fd957d0a75be800108e8073f291eae4bb5e850d89ba5ffc3c089d926599cda6b"} Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.495487 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"67d9254537680f83ae0f41128e1db06c62895dfe77c9fadacfcaa89adcb39474"} Feb 20 00:08:35 crc kubenswrapper[5107]: I0220 00:08:35.496682 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"d7c9d2a6c9ccdc53f6e4cc6266d9ee0d6e9d95f36d7c0de853d1bb8ba09e2f54"} Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.519961 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.531642 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:08:35 crc kubenswrapper[5107]: E0220 00:08:35.804216 5107 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.048907 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.049796 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.049833 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.049842 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.049865 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.050308 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.244895 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.246784 5107 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 20 00:08:36 crc kubenswrapper[5107]: 
I0220 00:08:36.372696 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.502066 5107 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4" exitCode=0 Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.502196 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.502230 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.503299 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.503357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.503374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.503606 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.504870 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5" exitCode=0 Feb 20 00:08:36 
crc kubenswrapper[5107]: I0220 00:08:36.504955 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.505066 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.506413 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.506475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.506495 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.506788 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.508398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ffb40cfb5387637dd74538e38d6fb34e7cc8da65f8ad2eff1895f6909ca0c654"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.508461 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.510616 5107 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae" exitCode=0 Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.510707 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.510756 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.511337 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.511365 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.511378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.511544 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.512925 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234" exitCode=0 Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.512973 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234"} Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.513079 5107 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.513760 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.513788 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.513800 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:36 crc kubenswrapper[5107]: E0220 00:08:36.513980 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.517725 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.519863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.519892 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:36 crc kubenswrapper[5107]: I0220 00:08:36.519904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.359194 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.372324 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.407493 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.519107 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.519201 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.519219 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.519234 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.521994 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6" exitCode=0 
Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.522074 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.522194 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.523259 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.523344 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.523362 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.523627 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.524435 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"ae855fa132589a0ff2cab5d759b987873a303aa4bb84df6a28b53fa6b464e49b"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.524504 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.525328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.525369 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.525386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.525592 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.528071 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"aae38182766654da3531d1b7fce65ce54baf0eb6cdef526e425fd10caafab1ff"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.528113 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"75a0b0bb620b8d44941a13deb15b5425f660fa29906845162437be55efde7325"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.528123 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"58824b379b4ba7a83b26b170ff5f48aa0cbf08e2316033a29a3551130089e9a8"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.528454 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.529525 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.529553 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.529565 5107 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.529835 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.531863 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e73f5bd7fd072c39cd086b8592d379aa14a144a641a7a82a2c04d566c4c7010f"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.531883 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1346c22559f69486b1be71c79b1b5d84a95c66686bd7804f6040906fd83e3d99"} Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.531981 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.533759 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.533782 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.533791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.533951 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.650793 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" 
Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.652342 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.652378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.652389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:37 crc kubenswrapper[5107]: I0220 00:08:37.652416 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:08:37 crc kubenswrapper[5107]: E0220 00:08:37.653011 5107 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.271338 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.371877 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.405600 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.537842 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e1d64ead20b75302d7fb87b5b0c61160ca334799313a0ad0cbd50aa7831cab08"} Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.538307 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.541476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.541695 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.541943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.542804 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.542939 5107 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c" exitCode=0 Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.543117 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c"} Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.543182 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:38 
crc kubenswrapper[5107]: I0220 00:08:38.543277 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.543390 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.543438 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.543473 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544328 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544347 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544373 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544379 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544432 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544464 5107 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.544392 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.545035 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.545651 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.546044 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.547087 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.547336 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:38 crc kubenswrapper[5107]: I0220 00:08:38.548010 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:38 crc kubenswrapper[5107]: E0220 00:08:38.548720 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550362 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"14649d9c6bacb041e78519e3f952045ad2764afa541e79ac4ebbba3f56a9fe60"} Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550407 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9ab97dc99887641495dad3cb72be9ca892429d9f760851d511ba737c10aa533c"} Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550420 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bd2b23ba1c015bb90e717507a601c198e1441b517ca59751798a2438d6162355"} Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550550 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550596 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.550553 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.551386 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.551418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.551430 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:39 crc kubenswrapper[5107]: E0220 00:08:39.551954 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.552286 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.552351 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 20 00:08:39 crc kubenswrapper[5107]: I0220 00:08:39.552361 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:39 crc kubenswrapper[5107]: E0220 00:08:39.552741 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.056640 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.560310 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3d6485423a2d33593eca47b7ecb17b0fd5cc6b1952a7ed37022254df9c794a8d"} Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.560376 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c7191b4213be9fa380280dfde810cd85aad6b56fefb3983b43b1e50eca05dfa3"} Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.560486 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.560563 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.560569 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.561556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.561611 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.561630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:40 crc kubenswrapper[5107]: E0220 00:08:40.561969 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.562194 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.562369 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.562504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:40 crc kubenswrapper[5107]: E0220 00:08:40.563198 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.613724 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.853694 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.855622 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.855718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:40 crc kubenswrapper[5107]: I0220 00:08:40.855746 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:40 
crc kubenswrapper[5107]: I0220 00:08:40.855806 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.401029 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.563690 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.563743 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.565323 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.565530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.565683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.565635 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.565978 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:41 crc kubenswrapper[5107]: I0220 00:08:41.566039 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:41 crc kubenswrapper[5107]: E0220 00:08:41.566591 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:41 crc kubenswrapper[5107]: E0220 00:08:41.567280 5107 kubelet.go:3336] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.198557 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.198900 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.200014 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.200085 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.200113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:43 crc kubenswrapper[5107]: E0220 00:08:43.200805 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.399005 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.399261 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.400360 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.400403 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.400421 5107 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:43 crc kubenswrapper[5107]: E0220 00:08:43.400777 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.412266 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.568925 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.569989 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.570235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.570392 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:43 crc kubenswrapper[5107]: E0220 00:08:43.571184 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:43 crc kubenswrapper[5107]: I0220 00:08:43.720595 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.157253 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.157891 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.159502 5107 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.159557 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.159599 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:44 crc kubenswrapper[5107]: E0220 00:08:44.163054 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.342286 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:44 crc kubenswrapper[5107]: E0220 00:08:44.544214 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.570583 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.571532 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.571588 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:44 crc kubenswrapper[5107]: I0220 00:08:44.571610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:44 crc kubenswrapper[5107]: E0220 00:08:44.572115 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.427129 5107
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.573453 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.574627 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.574779 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.574806 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:45 crc kubenswrapper[5107]: E0220 00:08:45.575759 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.581223 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.772340 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.772719 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.773685 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.773760 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:45 crc kubenswrapper[5107]: I0220 00:08:45.773787 5107
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:45 crc kubenswrapper[5107]: E0220 00:08:45.774554 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.575990 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.577082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.577114 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.577125 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:46 crc kubenswrapper[5107]: E0220 00:08:46.577450 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.721399 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 00:08:46 crc kubenswrapper[5107]: I0220 00:08:46.721515 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for
connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 00:08:47 crc kubenswrapper[5107]: I0220 00:08:47.578634 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:47 crc kubenswrapper[5107]: I0220 00:08:47.579963 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:47 crc kubenswrapper[5107]: I0220 00:08:47.580044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:47 crc kubenswrapper[5107]: I0220 00:08:47.580055 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:47 crc kubenswrapper[5107]: E0220 00:08:47.580383 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:48 crc kubenswrapper[5107]: I0220 00:08:48.530249 5107 trace.go:236] Trace[1291979828]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:08:38.528) (total time: 10001ms):
Feb 20 00:08:48 crc kubenswrapper[5107]: Trace[1291979828]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:48.530)
Feb 20 00:08:48 crc kubenswrapper[5107]: Trace[1291979828]: [10.001243221s] [10.001243221s] END
Feb 20 00:08:48 crc kubenswrapper[5107]: E0220 00:08:48.530302 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 20 00:08:49 crc kubenswrapper[5107]: I0220 00:08:49.373279 5107
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 20 00:08:49 crc kubenswrapper[5107]: I0220 00:08:49.794051 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 20 00:08:49 crc kubenswrapper[5107]: I0220 00:08:49.794201 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 20 00:08:49 crc kubenswrapper[5107]: I0220 00:08:49.806709 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 20 00:08:49 crc kubenswrapper[5107]: I0220 00:08:49.806807 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 20 00:08:50 crc kubenswrapper[5107]: E0220 00:08:50.608889 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 20 00:08:52 crc kubenswrapper[5107]: E0220 00:08:52.421082 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.230798 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.231172 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.232435 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.232525 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.232541 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:53 crc kubenswrapper[5107]: E0220 00:08:53.232960 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.236971 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.597330 5107 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.597427 5107
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.598365 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.598415 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:53 crc kubenswrapper[5107]: I0220 00:08:53.598428 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:53 crc kubenswrapper[5107]: E0220 00:08:53.598867 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.544466 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.802423 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce1384c4c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.391327944 +0000 UTC m=+0.759985520,LastTimestamp:2026-02-20 00:08:34.391327944 +0000 UTC m=+0.759985520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.802907 5107 trace.go:236] Trace[1688175189]: "Reflector ListAndWatch"
name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:08:42.702) (total time: 12100ms):
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1688175189]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 12100ms (00:08:54.802)
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1688175189]: [12.100478234s] [12.100478234s] END
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.802938 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.803124 5107 trace.go:236] Trace[1026558979]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:08:43.181) (total time: 11621ms):
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1026558979]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 11621ms (00:08:54.803)
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1026558979]: [11.621437368s] [11.621437368s] END
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.803196 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.803702 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.804551 5107 trace.go:236] Trace[1437090768]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Feb-2026 00:08:42.327) (total time: 12476ms):
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1437090768]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 12476ms (00:08:54.804)
Feb 20 00:08:54 crc kubenswrapper[5107]: Trace[1437090768]: [12.476742359s] [12.476742359s] END
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.804597 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.805472 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] []
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.806603 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.809251 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.812007 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 20
00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.812942 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce1c85c707 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.542388999 +0000 UTC m=+0.911046605,LastTimestamp:2026-02-20 00:08:34.542388999 +0000 UTC m=+0.911046605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.815181 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.586194277 +0000 UTC m=+0.954851843,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.818475 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.586215057 +0000 UTC m=+0.954872623,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.820368 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.586226178 +0000 UTC m=+0.954883744,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.823823 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.587928406 +0000 UTC m=+0.956585972,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.827263 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.587941886 +0000 UTC m=+0.956599452,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.829751 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.587954086 +0000 UTC m=+0.956611652,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.838495 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.588974163 +0000 UTC m=+0.957631739,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.845036 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.588985273 +0000 UTC m=+0.957642839,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.853945 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40386->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.854024 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40386->192.168.126.11:17697: read: connection reset by peer"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.853946 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40398->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.854091 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40398->192.168.126.11:17697: read: connection reset by peer"
Feb 20 00:08:54 crc
kubenswrapper[5107]: I0220 00:08:54.854373 5107 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.854404 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.855687 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.588994423 +0000 UTC m=+0.957651989,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.865253 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.865479 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume
controller attach/detach"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.865857 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.589819067 +0000 UTC m=+0.958476653,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.866413 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.866457 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.866473 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.866879 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:08:54 crc kubenswrapper[5107]: I0220 00:08:54.871288 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.872388 5107 event.go:359] "Server rejected event (will not retry!)"
err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.589893858 +0000 UTC m=+0.958551434,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.878463 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.589966929 +0000 UTC m=+0.958624515,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.884373 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.590065781 +0000 UTC m=+0.958723347,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.890611 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.590080351 +0000 UTC m=+0.958737917,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.896780 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.590092161 +0000 UTC m=+0.958749727,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.906057 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.591961762 +0000 UTC m=+0.960619328,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.911760 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.591978152 +0000 UTC m=+0.960635718,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.917442 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17797e1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17797e1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457697822 +0000 UTC m=+0.826355388,LastTimestamp:2026-02-20 00:08:34.591989462 +0000 UTC m=+0.960647028,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.923958 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce177913a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce177913a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457670561 +0000 UTC 
m=+0.826328117,LastTimestamp:2026-02-20 00:08:34.592077714 +0000 UTC m=+0.960735290,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.931521 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1895cbce17795073\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1895cbce17795073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:34.457686131 +0000 UTC m=+0.826343687,LastTimestamp:2026-02-20 00:08:34.592104224 +0000 UTC m=+0.960761800,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.940210 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbce3912327e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.021353598 +0000 UTC m=+1.390011204,LastTimestamp:2026-02-20 00:08:35.021353598 +0000 UTC m=+1.390011204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.947555 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbce3a8c1a75 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.046120053 +0000 UTC m=+1.414777619,LastTimestamp:2026-02-20 00:08:35.046120053 +0000 UTC m=+1.414777619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.952328 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbce3a979646 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.046872646 +0000 UTC m=+1.415530212,LastTimestamp:2026-02-20 00:08:35.046872646 +0000 UTC m=+1.415530212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.958740 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce3c6917a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.077380006 +0000 UTC m=+1.446037612,LastTimestamp:2026-02-20 00:08:35.077380006 +0000 UTC m=+1.446037612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.966191 5107 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbce3d7803dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.095135197 +0000 UTC m=+1.463792803,LastTimestamp:2026-02-20 00:08:35.095135197 +0000 UTC m=+1.463792803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.972762 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbce63d15a4e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.738524238 +0000 UTC m=+2.107181814,LastTimestamp:2026-02-20 00:08:35.738524238 +0000 UTC 
m=+2.107181814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.978631 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce63d6642c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.738854444 +0000 UTC m=+2.107512010,LastTimestamp:2026-02-20 00:08:35.738854444 +0000 UTC m=+2.107512010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.984302 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbce63e92fa8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.740086184 +0000 UTC m=+2.108743750,LastTimestamp:2026-02-20 00:08:35.740086184 +0000 UTC m=+2.108743750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.989839 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbce64065287 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.741995655 +0000 UTC m=+2.110653211,LastTimestamp:2026-02-20 00:08:35.741995655 +0000 UTC m=+2.110653211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:54 crc kubenswrapper[5107]: E0220 00:08:54.997644 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbce643749e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.745204708 +0000 UTC m=+2.113862274,LastTimestamp:2026-02-20 00:08:35.745204708 +0000 UTC m=+2.113862274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.004455 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbce646360a3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.748094115 +0000 UTC m=+2.116751681,LastTimestamp:2026-02-20 00:08:35.748094115 +0000 UTC m=+2.116751681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.011982 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbce64eab1d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.75696226 +0000 UTC m=+2.125619826,LastTimestamp:2026-02-20 00:08:35.75696226 +0000 UTC m=+2.125619826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.016835 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce64f536c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.757651652 +0000 UTC m=+2.126309218,LastTimestamp:2026-02-20 00:08:35.757651652 +0000 UTC m=+2.126309218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.023078 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbce64f5fb05 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.757701893 +0000 UTC m=+2.126359479,LastTimestamp:2026-02-20 00:08:35.757701893 +0000 UTC m=+2.126359479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.028318 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbce64fb8733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.758065459 +0000 UTC m=+2.126723015,LastTimestamp:2026-02-20 00:08:35.758065459 +0000 UTC m=+2.126723015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.035092 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce650210e6 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:35.758493926 +0000 UTC m=+2.127151492,LastTimestamp:2026-02-20 00:08:35.758493926 +0000 UTC m=+2.127151492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.043015 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce78720251 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.084597329 +0000 UTC m=+2.453254905,LastTimestamp:2026-02-20 00:08:36.084597329 +0000 UTC m=+2.453254905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.048584 5107 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce793fea32 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.09809157 +0000 UTC m=+2.466749166,LastTimestamp:2026-02-20 00:08:36.09809157 +0000 UTC m=+2.466749166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.056604 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbce795802c6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 
00:08:36.099670726 +0000 UTC m=+2.468328302,LastTimestamp:2026-02-20 00:08:36.099670726 +0000 UTC m=+2.468328302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.061886 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbce91805091 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.504965265 +0000 UTC m=+2.873622841,LastTimestamp:2026-02-20 00:08:36.504965265 +0000 UTC m=+2.873622841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.067862 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbce91b400e6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.508352742 +0000 UTC m=+2.877010348,LastTimestamp:2026-02-20 00:08:36.508352742 +0000 UTC m=+2.877010348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.078388 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbce92369148 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.516909384 +0000 UTC m=+2.885566960,LastTimestamp:2026-02-20 00:08:36.516909384 +0000 UTC m=+2.885566960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.084121 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbce923f800a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.517494794 +0000 UTC m=+2.886152370,LastTimestamp:2026-02-20 00:08:36.517494794 +0000 UTC m=+2.886152370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.092858 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbcea20b4371 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.782506865 +0000 UTC m=+3.151164431,LastTimestamp:2026-02-20 00:08:36.782506865 +0000 UTC m=+3.151164431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.098486 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcea21dcb8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.783721356 +0000 UTC m=+3.152378922,LastTimestamp:2026-02-20 00:08:36.783721356 +0000 UTC m=+3.152378922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.103648 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbcea221a95e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.78397475 +0000 UTC m=+3.152632316,LastTimestamp:2026-02-20 00:08:36.78397475 
+0000 UTC m=+3.152632316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.108905 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcea227fd79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.784389497 +0000 UTC m=+3.153047063,LastTimestamp:2026-02-20 00:08:36.784389497 +0000 UTC m=+3.153047063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.115720 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbcea30e93f3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.799501299 +0000 UTC m=+3.168158865,LastTimestamp:2026-02-20 
00:08:36.799501299 +0000 UTC m=+3.168158865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.118727 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbcea3213855 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.800723029 +0000 UTC m=+3.169380595,LastTimestamp:2026-02-20 00:08:36.800723029 +0000 UTC m=+3.169380595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.122699 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcea36a72d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.805522129 +0000 UTC m=+3.174179695,LastTimestamp:2026-02-20 00:08:36.805522129 +0000 UTC m=+3.174179695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.127936 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcea3be7711 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.811028241 +0000 UTC m=+3.179685807,LastTimestamp:2026-02-20 00:08:36.811028241 +0000 UTC m=+3.179685807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.133606 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895cbcea47312f5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.822864629 +0000 UTC m=+3.191522185,LastTimestamp:2026-02-20 00:08:36.822864629 +0000 UTC m=+3.191522185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.139254 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcea4a066f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.825835248 +0000 UTC m=+3.194492814,LastTimestamp:2026-02-20 00:08:36.825835248 +0000 UTC m=+3.194492814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.143840 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbceaa5d428a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.922098314 +0000 UTC m=+3.290755880,LastTimestamp:2026-02-20 00:08:36.922098314 +0000 UTC m=+3.290755880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.148982 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbceab4d67e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.937836517 +0000 UTC m=+3.306494083,LastTimestamp:2026-02-20 00:08:36.937836517 +0000 UTC m=+3.306494083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.152987 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbceab727bb1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.940266417 +0000 UTC m=+3.308923983,LastTimestamp:2026-02-20 00:08:36.940266417 +0000 UTC m=+3.308923983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.157173 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbceaeb643c9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.995040201 +0000 UTC m=+3.363697767,LastTimestamp:2026-02-20 00:08:36.995040201 +0000 UTC m=+3.363697767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.164698 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceaed5d5e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:36.997109225 +0000 UTC m=+3.365766791,LastTimestamp:2026-02-20 00:08:36.997109225 +0000 UTC m=+3.365766791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.168861 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbceaf82140c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.008397324 +0000 UTC m=+3.377054910,LastTimestamp:2026-02-20 00:08:37.008397324 +0000 UTC m=+3.377054910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.174182 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbceaf954217 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.009654295 +0000 UTC m=+3.378311861,LastTimestamp:2026-02-20 00:08:37.009654295 +0000 UTC m=+3.378311861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc 
kubenswrapper[5107]: E0220 00:08:55.179121 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceafc18307 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.012554503 +0000 UTC m=+3.381212069,LastTimestamp:2026-02-20 00:08:37.012554503 +0000 UTC m=+3.381212069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.184218 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceafcdcd55 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.013359957 +0000 UTC 
m=+3.382017523,LastTimestamp:2026-02-20 00:08:37.013359957 +0000 UTC m=+3.382017523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.190860 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbceb8c83609 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.163988489 +0000 UTC m=+3.532646055,LastTimestamp:2026-02-20 00:08:37.163988489 +0000 UTC m=+3.532646055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.196047 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbceba88e836 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.19339423 +0000 UTC m=+3.562051786,LastTimestamp:2026-02-20 00:08:37.19339423 +0000 UTC m=+3.562051786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.201504 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbcebbf3dfc5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.217181637 +0000 UTC m=+3.585839203,LastTimestamp:2026-02-20 00:08:37.217181637 +0000 UTC m=+3.585839203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.207919 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcebc2cc91d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.220911389 +0000 UTC m=+3.589568945,LastTimestamp:2026-02-20 00:08:37.220911389 +0000 UTC m=+3.589568945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.212493 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1895cbcebce64bfc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.233069052 +0000 UTC m=+3.601726618,LastTimestamp:2026-02-20 00:08:37.233069052 +0000 UTC m=+3.601726618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.216645 5107 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcebd0029a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.2347642 +0000 UTC m=+3.603421766,LastTimestamp:2026-02-20 00:08:37.2347642 +0000 UTC m=+3.603421766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.220904 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcebd0dc3f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.235655665 +0000 UTC m=+3.604313231,LastTimestamp:2026-02-20 00:08:37.235655665 +0000 UTC 
m=+3.604313231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.225050 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbcec8b41caf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.431106735 +0000 UTC m=+3.799764301,LastTimestamp:2026-02-20 00:08:37.431106735 +0000 UTC m=+3.799764301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.229276 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceca29ba44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 
00:08:37.455592004 +0000 UTC m=+3.824249570,LastTimestamp:2026-02-20 00:08:37.455592004 +0000 UTC m=+3.824249570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.236339 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceca37f6aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.45652497 +0000 UTC m=+3.825182536,LastTimestamp:2026-02-20 00:08:37.45652497 +0000 UTC m=+3.825182536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.241334 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcece558c53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.525572691 +0000 UTC m=+3.894230267,LastTimestamp:2026-02-20 00:08:37.525572691 +0000 UTC m=+3.894230267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.246959 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9013924 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.704595748 +0000 UTC m=+4.073253314,LastTimestamp:2026-02-20 00:08:37.704595748 +0000 UTC m=+4.073253314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.252350 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9970142 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.714411842 +0000 UTC m=+4.083069408,LastTimestamp:2026-02-20 00:08:37.714411842 +0000 UTC m=+4.083069408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.256207 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcedc1e9842 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.75685229 +0000 UTC m=+4.125509856,LastTimestamp:2026-02-20 00:08:37.75685229 +0000 UTC m=+4.125509856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.261181 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcedd2f149e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.774709918 +0000 UTC m=+4.143367484,LastTimestamp:2026-02-20 00:08:37.774709918 +0000 UTC m=+4.143367484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.266907 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf0b2eaa0c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:38.546434572 +0000 UTC m=+4.915092138,LastTimestamp:2026-02-20 00:08:38.546434572 +0000 UTC m=+4.915092138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.272419 5107 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf1877aa90 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:38.76932264 +0000 UTC m=+5.137980206,LastTimestamp:2026-02-20 00:08:38.76932264 +0000 UTC m=+5.137980206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.276662 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf19f86ff9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:38.794539001 +0000 UTC m=+5.163196597,LastTimestamp:2026-02-20 00:08:38.794539001 +0000 UTC m=+5.163196597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.281100 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf1a0f8446 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:38.796051526 +0000 UTC m=+5.164709122,LastTimestamp:2026-02-20 00:08:38.796051526 +0000 UTC m=+5.164709122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.285772 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf28cf6165 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.043506533 +0000 UTC m=+5.412164099,LastTimestamp:2026-02-20 00:08:39.043506533 +0000 UTC m=+5.412164099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.290742 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf29a4b9f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.057488376 +0000 UTC m=+5.426145962,LastTimestamp:2026-02-20 00:08:39.057488376 +0000 UTC m=+5.426145962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.295412 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf29b2521b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.058379291 +0000 UTC m=+5.427036867,LastTimestamp:2026-02-20 00:08:39.058379291 +0000 UTC m=+5.427036867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.301560 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf3acfe488 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.345529992 +0000 UTC m=+5.714187598,LastTimestamp:2026-02-20 00:08:39.345529992 +0000 UTC m=+5.714187598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.306581 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf3bc163e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.361356776 +0000 UTC m=+5.730014352,LastTimestamp:2026-02-20 00:08:39.361356776 +0000 UTC m=+5.730014352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.311548 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf3bd0dde3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.362371043 +0000 UTC m=+5.731028639,LastTimestamp:2026-02-20 00:08:39.362371043 +0000 UTC m=+5.731028639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.319471 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf4a102564 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.60139914 +0000 UTC m=+5.970056706,LastTimestamp:2026-02-20 00:08:39.60139914 +0000 UTC m=+5.970056706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.323782 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf4b342a39 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.620536889 +0000 UTC m=+5.989194455,LastTimestamp:2026-02-20 00:08:39.620536889 +0000 UTC m=+5.989194455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.326230 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf4b4be431 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.622091825 +0000 UTC m=+5.990749391,LastTimestamp:2026-02-20 00:08:39.622091825 +0000 UTC m=+5.990749391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.328315 5107 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf5a4f0e83 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.873957507 +0000 UTC m=+6.242615103,LastTimestamp:2026-02-20 00:08:39.873957507 +0000 UTC m=+6.242615103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.331360 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1895cbcf5b91c32e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:39.89510635 +0000 UTC m=+6.263763956,LastTimestamp:2026-02-20 00:08:39.89510635 +0000 UTC m=+6.263763956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.334695 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-controller-manager-crc.1895cbd0f273ee0b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 20 00:08:55 crc kubenswrapper[5107]: body: Feb 20 00:08:55 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:46.721478155 +0000 UTC m=+13.090135751,LastTimestamp:2026-02-20 00:08:46.721478155 +0000 UTC m=+13.090135751,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.339662 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1895cbd0f2759024 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:46.721585188 +0000 UTC m=+13.090242784,LastTimestamp:2026-02-20 00:08:46.721585188 +0000 UTC m=+13.090242784,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.345662 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbd1a99971d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 20 00:08:55 crc kubenswrapper[5107]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:08:55 crc kubenswrapper[5107]: Feb 20 00:08:55 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:49.794167253 +0000 UTC m=+16.162824859,LastTimestamp:2026-02-20 00:08:49.794167253 +0000 UTC m=+16.162824859,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.350622 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd1a99a8dc9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:49.794239945 +0000 UTC m=+16.162897551,LastTimestamp:2026-02-20 00:08:49.794239945 +0000 UTC m=+16.162897551,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.357571 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd1a99971d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbd1a99971d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 20 00:08:55 crc kubenswrapper[5107]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 20 00:08:55 crc kubenswrapper[5107]: Feb 20 00:08:55 crc 
kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:49.794167253 +0000 UTC m=+16.162824859,LastTimestamp:2026-02-20 00:08:49.80677202 +0000 UTC m=+16.175429596,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.363747 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd1a99a8dc9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd1a99a8dc9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:49.794239945 +0000 UTC m=+16.162897551,LastTimestamp:2026-02-20 00:08:49.806838182 +0000 UTC m=+16.175495758,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.370313 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d73057ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:40386->192.168.126.11:17697: read: connection reset by peer Feb 20 00:08:55 crc kubenswrapper[5107]: body: Feb 20 00:08:55 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.853998509 +0000 UTC m=+21.222656085,LastTimestamp:2026-02-20 00:08:54.853998509 +0000 UTC m=+21.222656085,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.377999 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d7311afe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40386->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.85404851 +0000 UTC m=+21.222706086,LastTimestamp:2026-02-20 00:08:54.85404851 +0000 UTC m=+21.222706086,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.383362 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.383811 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d731a9c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:40398->192.168.126.11:17697: read: connection reset by peer Feb 20 00:08:55 crc kubenswrapper[5107]: body: Feb 20 00:08:55 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.854085061 +0000 UTC m=+21.222742637,LastTimestamp:2026-02-20 00:08:54.854085061 +0000 UTC m=+21.222742637,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.389544 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d731f340 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40398->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.854103872 +0000 UTC m=+21.222761448,LastTimestamp:2026-02-20 00:08:54.854103872 +0000 UTC m=+21.222761448,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.396011 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 20 00:08:55 crc kubenswrapper[5107]: &Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d7365db6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 20 00:08:55 crc kubenswrapper[5107]: body: Feb 20 00:08:55 crc kubenswrapper[5107]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.85439327 +0000 UTC m=+21.223050856,LastTimestamp:2026-02-20 00:08:54.85439327 +0000 UTC 
m=+21.223050856,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 20 00:08:55 crc kubenswrapper[5107]: > Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.403067 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd2d736cad0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:54.8544212 +0000 UTC m=+21.223078786,LastTimestamp:2026-02-20 00:08:54.8544212 +0000 UTC m=+21.223078786,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.604411 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.606503 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e1d64ead20b75302d7fb87b5b0c61160ca334799313a0ad0cbd50aa7831cab08" exitCode=255 Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.606578 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e1d64ead20b75302d7fb87b5b0c61160ca334799313a0ad0cbd50aa7831cab08"} Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.606759 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.606827 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607459 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607489 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607502 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.607853 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607903 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607979 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.607998 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.608133 5107 scope.go:117] "RemoveContainer" containerID="e1d64ead20b75302d7fb87b5b0c61160ca334799313a0ad0cbd50aa7831cab08" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.609317 5107 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.618324 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbceca37f6aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceca37f6aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.45652497 +0000 UTC m=+3.825182536,LastTimestamp:2026-02-20 00:08:55.610094461 +0000 UTC m=+21.978752027,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.798569 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.798818 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.799598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.799645 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 
00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.799663 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.800160 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:55 crc kubenswrapper[5107]: I0220 00:08:55.815134 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.830113 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbced9013924\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9013924 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.704595748 +0000 UTC m=+4.073253314,LastTimestamp:2026-02-20 00:08:55.823212387 +0000 UTC m=+22.191869953,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:55 crc kubenswrapper[5107]: E0220 00:08:55.838567 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbced9970142\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9970142 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.714411842 +0000 UTC m=+4.083069408,LastTimestamp:2026-02-20 00:08:55.834281822 +0000 UTC m=+22.202939398,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.379883 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.610058 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.611380 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec"} Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.611479 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.611543 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.611995 5107 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.612023 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.612024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.612063 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.612075 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:56 crc kubenswrapper[5107]: I0220 00:08:56.612033 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:56 crc kubenswrapper[5107]: E0220 00:08:56.612602 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:56 crc kubenswrapper[5107]: E0220 00:08:56.612839 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:57 crc kubenswrapper[5107]: E0220 00:08:57.017021 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:08:57 crc kubenswrapper[5107]: I0220 00:08:57.379509 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:08:58 
crc kubenswrapper[5107]: I0220 00:08:58.376418 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.617269 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.617741 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.619428 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" exitCode=255 Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.619500 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec"} Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.619569 5107 scope.go:117] "RemoveContainer" containerID="e1d64ead20b75302d7fb87b5b0c61160ca334799313a0ad0cbd50aa7831cab08" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.619951 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.620634 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.620661 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.620672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:58 crc kubenswrapper[5107]: E0220 00:08:58.621016 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:58 crc kubenswrapper[5107]: I0220 00:08:58.621268 5107 scope.go:117] "RemoveContainer" containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" Feb 20 00:08:58 crc kubenswrapper[5107]: E0220 00:08:58.621475 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:08:58 crc kubenswrapper[5107]: E0220 00:08:58.628098 5107 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC 
m=+24.990101826,LastTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.386456 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.624826 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.638208 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.638414 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.639590 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.639649 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.639670 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:08:59 crc kubenswrapper[5107]: E0220 00:08:59.640203 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:08:59 crc kubenswrapper[5107]: I0220 00:08:59.640586 5107 scope.go:117] "RemoveContainer" 
containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" Feb 20 00:08:59 crc kubenswrapper[5107]: E0220 00:08:59.640897 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:08:59 crc kubenswrapper[5107]: E0220 00:08:59.647830 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd3b7bf00a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,LastTimestamp:2026-02-20 00:08:59.640857271 +0000 UTC m=+26.009514847,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:00 crc kubenswrapper[5107]: I0220 00:09:00.375927 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Feb 20 00:09:00 crc kubenswrapper[5107]: E0220 00:09:00.700730 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.207104 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.208849 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.208936 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.208957 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.209416 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:01 crc kubenswrapper[5107]: E0220 00:09:01.217616 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:01 crc kubenswrapper[5107]: I0220 00:09:01.381912 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:02 crc kubenswrapper[5107]: I0220 00:09:02.380219 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:03 crc kubenswrapper[5107]: I0220 00:09:03.374093 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:04 crc kubenswrapper[5107]: E0220 00:09:04.023192 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:04 crc kubenswrapper[5107]: I0220 00:09:04.379345 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:04 crc kubenswrapper[5107]: E0220 00:09:04.545732 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:09:05 crc kubenswrapper[5107]: E0220 00:09:05.046068 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:09:05 crc kubenswrapper[5107]: I0220 00:09:05.376986 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 
00:09:05 crc kubenswrapper[5107]: E0220 00:09:05.564573 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.378522 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.612814 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.613510 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.614786 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.614856 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.614876 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:06 crc kubenswrapper[5107]: E0220 00:09:06.615729 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:06 crc kubenswrapper[5107]: I0220 00:09:06.616234 5107 scope.go:117] "RemoveContainer" containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" Feb 20 00:09:06 crc kubenswrapper[5107]: E0220 00:09:06.616802 5107 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:06 crc kubenswrapper[5107]: E0220 00:09:06.624438 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd3b7bf00a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,LastTimestamp:2026-02-20 00:09:06.616522601 +0000 UTC m=+32.985180197,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:07 crc kubenswrapper[5107]: E0220 00:09:07.192335 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:09:07 crc 
kubenswrapper[5107]: I0220 00:09:07.379029 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.218014 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.219216 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.219268 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.219289 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.219325 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:08 crc kubenswrapper[5107]: E0220 00:09:08.235614 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:08 crc kubenswrapper[5107]: I0220 00:09:08.377477 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:09 crc kubenswrapper[5107]: I0220 00:09:09.378523 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Feb 20 00:09:10 crc kubenswrapper[5107]: I0220 00:09:10.379539 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:11 crc kubenswrapper[5107]: E0220 00:09:11.031428 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:11 crc kubenswrapper[5107]: I0220 00:09:11.380528 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:12 crc kubenswrapper[5107]: I0220 00:09:12.379847 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:13 crc kubenswrapper[5107]: I0220 00:09:13.378252 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:14 crc kubenswrapper[5107]: I0220 00:09:14.377532 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:14 crc kubenswrapper[5107]: E0220 00:09:14.546497 5107 eviction_manager.go:292] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.236243 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.237392 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.237418 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.237440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.237460 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:15 crc kubenswrapper[5107]: E0220 00:09:15.251525 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:15 crc kubenswrapper[5107]: I0220 00:09:15.379980 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:16 crc kubenswrapper[5107]: I0220 00:09:16.376593 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:17 crc kubenswrapper[5107]: I0220 00:09:17.377194 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:18 crc kubenswrapper[5107]: E0220 00:09:18.039511 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.383230 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.485340 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.486433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.486476 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.486491 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:18 crc kubenswrapper[5107]: E0220 00:09:18.486944 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:18 crc kubenswrapper[5107]: I0220 00:09:18.487263 5107 scope.go:117] "RemoveContainer" containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" Feb 20 00:09:18 crc kubenswrapper[5107]: E0220 00:09:18.497514 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbceca37f6aa\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbceca37f6aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.45652497 +0000 UTC m=+3.825182536,LastTimestamp:2026-02-20 00:09:18.488893436 +0000 UTC m=+44.857551012,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:18 crc kubenswrapper[5107]: E0220 00:09:18.730665 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbced9013924\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9013924 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.704595748 +0000 UTC m=+4.073253314,LastTimestamp:2026-02-20 00:09:18.721342905 +0000 UTC 
m=+45.090000521,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:18 crc kubenswrapper[5107]: E0220 00:09:18.741625 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbced9970142\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbced9970142 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:37.714411842 +0000 UTC m=+4.083069408,LastTimestamp:2026-02-20 00:09:18.733802968 +0000 UTC m=+45.102460574,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.377118 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:19 crc kubenswrapper[5107]: E0220 00:09:19.419474 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.684748 5107 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.687031 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480"} Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.687455 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.688249 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.688305 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:19 crc kubenswrapper[5107]: I0220 00:09:19.688330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:19 crc kubenswrapper[5107]: E0220 00:09:19.688893 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.380373 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.692922 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.694661 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.697735 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480" exitCode=255 Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.697840 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480"} Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.698165 5107 scope.go:117] "RemoveContainer" containerID="1428321bd932a8d3f8fd9f6b9d9f5ac3057604093b82b84ad96c6b99aeed15ec" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.698405 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.699381 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.699441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.699463 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:20 crc kubenswrapper[5107]: E0220 00:09:20.699992 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:20 crc kubenswrapper[5107]: I0220 00:09:20.700527 5107 scope.go:117] "RemoveContainer" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480" Feb 20 00:09:20 crc kubenswrapper[5107]: E0220 
00:09:20.700909 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:20 crc kubenswrapper[5107]: E0220 00:09:20.709699 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd3b7bf00a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,LastTimestamp:2026-02-20 00:09:20.700859724 +0000 UTC m=+47.069517330,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:21 crc kubenswrapper[5107]: I0220 00:09:21.379211 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:21 crc kubenswrapper[5107]: I0220 00:09:21.703833 5107 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 20 00:09:22 crc kubenswrapper[5107]: E0220 00:09:22.062788 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.252676 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.254865 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.254935 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.254958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.254999 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:22 crc kubenswrapper[5107]: E0220 00:09:22.266829 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:22 crc kubenswrapper[5107]: I0220 00:09:22.379716 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:23 crc kubenswrapper[5107]: I0220 
00:09:23.376586 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:24 crc kubenswrapper[5107]: I0220 00:09:24.378673 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:24 crc kubenswrapper[5107]: E0220 00:09:24.546882 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:09:25 crc kubenswrapper[5107]: E0220 00:09:25.045591 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:25 crc kubenswrapper[5107]: I0220 00:09:25.378097 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:26 crc kubenswrapper[5107]: I0220 00:09:26.375298 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:27 crc kubenswrapper[5107]: I0220 00:09:27.378214 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 
20 00:09:28 crc kubenswrapper[5107]: I0220 00:09:28.377329 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.162739 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.267010 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.268389 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.268475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.268492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.268542 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.283278 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.380472 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.511639 5107 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.557450 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.558131 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.559602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.559910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.560116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.560859 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.638672 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.638903 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:29 
crc kubenswrapper[5107]: I0220 00:09:29.639743 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.640016 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.640340 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.641212 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.641969 5107 scope.go:117] "RemoveContainer" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.642542 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.651119 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd3b7bf00a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,LastTimestamp:2026-02-20 00:09:29.642484747 +0000 UTC m=+56.011142353,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.688426 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.728594 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.729240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.729273 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.729285 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.729644 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:29 crc kubenswrapper[5107]: I0220 00:09:29.729930 5107 scope.go:117] "RemoveContainer" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480" 
Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.730195 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:29 crc kubenswrapper[5107]: E0220 00:09:29.738783 5107 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1895cbd3b7bf00a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1895cbd3b7bf00a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:08:58.62144426 +0000 UTC m=+24.990101826,LastTimestamp:2026-02-20 00:09:29.73013812 +0000 UTC m=+56.098795696,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:09:30 crc kubenswrapper[5107]: I0220 00:09:30.378947 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:31 crc kubenswrapper[5107]: 
I0220 00:09:31.379534 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:32 crc kubenswrapper[5107]: E0220 00:09:32.051814 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:32 crc kubenswrapper[5107]: I0220 00:09:32.376888 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:33 crc kubenswrapper[5107]: I0220 00:09:33.373463 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:34 crc kubenswrapper[5107]: I0220 00:09:34.376727 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:34 crc kubenswrapper[5107]: E0220 00:09:34.547503 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:09:35 crc kubenswrapper[5107]: I0220 00:09:35.377409 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.284280 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.286382 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.286612 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.286639 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.286677 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:36 crc kubenswrapper[5107]: E0220 00:09:36.301681 5107 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 20 00:09:36 crc kubenswrapper[5107]: I0220 00:09:36.380002 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:37 crc kubenswrapper[5107]: I0220 00:09:37.379132 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:38 crc kubenswrapper[5107]: I0220 00:09:38.376829 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 20 00:09:39 crc kubenswrapper[5107]: E0220 00:09:39.060449 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 20 00:09:39 crc kubenswrapper[5107]: I0220 00:09:39.380011 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:40 crc kubenswrapper[5107]: I0220 00:09:40.376579 5107 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 20 00:09:41 crc kubenswrapper[5107]: I0220 00:09:41.190919 5107 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8l8cm" Feb 20 00:09:41 crc kubenswrapper[5107]: I0220 00:09:41.198366 5107 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-8l8cm" Feb 20 00:09:41 crc kubenswrapper[5107]: I0220 00:09:41.308498 5107 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 20 00:09:42 crc kubenswrapper[5107]: I0220 00:09:42.187524 5107 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 20 00:09:42 crc kubenswrapper[5107]: I0220 00:09:42.200205 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-22 00:04:41 +0000 UTC" deadline="2026-03-18 22:22:02.35124568 +0000 UTC" Feb 20 00:09:42 crc 
kubenswrapper[5107]: I0220 00:09:42.200260 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="646h12m20.150991733s" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.302432 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.303359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.303416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.303431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.303545 5107 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.316264 5107 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.316566 5107 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.316590 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.319475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.319506 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.319521 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.319538 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.319548 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:43Z","lastTransitionTime":"2026-02-20T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.331091 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.337942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.337990 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.338004 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.338023 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.338038 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:43Z","lastTransitionTime":"2026-02-20T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.348052 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.355097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.355193 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.355210 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.355252 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.355268 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:43Z","lastTransitionTime":"2026-02-20T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.364989 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.372302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.372350 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.372368 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.372385 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:09:43 crc kubenswrapper[5107]: I0220 00:09:43.372393 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:43Z","lastTransitionTime":"2026-02-20T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.382264 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.382457 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.382492 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.483508 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.584655 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.685013 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.785640 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.886097 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:43 crc kubenswrapper[5107]: E0220 00:09:43.986896 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.087012 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.188202 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.288890 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.389816 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.485847 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.486899 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.486932 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.486941 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.487331 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.487542 5107 scope.go:117] "RemoveContainer" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.490222 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.548758 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.591111 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.691951 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.766890 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.768473 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627"}
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.768748 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.769470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.769628 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:09:44 crc kubenswrapper[5107]: I0220 00:09:44.769745 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.770424 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.792689 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.893076 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:44 crc kubenswrapper[5107]: E0220 00:09:44.994175 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.095313 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.196130 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.296572 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.396703 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.496965 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.597354 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.698031 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.798921 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:45 crc kubenswrapper[5107]: E0220 00:09:45.899448 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.000257 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.100779 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.201777 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.301949 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.402450 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.503314 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.604430 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.704534 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.774244 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.774816 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777032 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" exitCode=255
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777073 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627"}
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777157 5107 scope.go:117] "RemoveContainer" containerID="c304a46bd446a681aace142c5669fdb1187ce8706444db6261213cf0f0796480"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777315 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777893 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777926 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.777939 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.778352 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 20 00:09:46 crc kubenswrapper[5107]: I0220 00:09:46.778571 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.778743 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.805374 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:46 crc kubenswrapper[5107]: E0220 00:09:46.906345 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.007344 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.107768 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.208292 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.309000 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.410187 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.510355 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.610750 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.710910 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: I0220 00:09:47.781407 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.811491 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:47 crc kubenswrapper[5107]: E0220 00:09:47.912123 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.013105 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.113598 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.214084 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.314376 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.415340 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.516347 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.616606 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.717728 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.818343 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:48 crc kubenswrapper[5107]: E0220 00:09:48.918741 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.019526 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.119788 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 20 
00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.220649 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.320755 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.420856 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.521577 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.622768 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.638104 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.638541 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.639667 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.639756 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.639784 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.641286 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:49 crc kubenswrapper[5107]: I0220 00:09:49.641680 5107 
scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.642324 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.722914 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.823851 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:49 crc kubenswrapper[5107]: E0220 00:09:49.923999 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.025213 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.126389 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.227338 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.328426 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.429419 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.530470 
5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.631422 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.732392 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.833360 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:50 crc kubenswrapper[5107]: E0220 00:09:50.933716 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.034024 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.134922 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.235759 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.337193 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.438342 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.538796 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.639534 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc 
kubenswrapper[5107]: E0220 00:09:51.740194 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.841420 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:51 crc kubenswrapper[5107]: E0220 00:09:51.942336 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.042720 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.143222 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.243812 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.344742 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.445634 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.546199 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.647234 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.748242 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.848683 5107 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 20 00:09:52 crc kubenswrapper[5107]: E0220 00:09:52.949279 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.049720 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.149989 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.250508 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.351443 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.452068 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.552766 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.653634 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.754382 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.774778 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.781126 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.781206 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.781226 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.781251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.781271 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:53Z","lastTransitionTime":"2026-02-20T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.797869 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.808403 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.808462 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.808480 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.808504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.808523 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:53Z","lastTransitionTime":"2026-02-20T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.824662 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.836629 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.836718 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.836746 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.836772 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.836790 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:53Z","lastTransitionTime":"2026-02-20T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.851875 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.863501 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.863562 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.863584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.863607 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:09:53 crc kubenswrapper[5107]: I0220 00:09:53.863623 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:09:53Z","lastTransitionTime":"2026-02-20T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.879627 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.880291 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.880461 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:53 crc kubenswrapper[5107]: E0220 00:09:53.981477 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.036769 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.082041 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.183284 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.283640 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.384502 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.485285 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.549592 5107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 20 00:09:54 crc 
kubenswrapper[5107]: E0220 00:09:54.586302 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.686758 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.769756 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.770428 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.771357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.771417 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.771437 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.772073 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:54 crc kubenswrapper[5107]: I0220 00:09:54.772493 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.772799 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.787497 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.888620 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:54 crc kubenswrapper[5107]: E0220 00:09:54.989798 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.090412 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.191312 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.291704 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.392362 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.493443 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.594520 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.694945 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.795285 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc 
kubenswrapper[5107]: E0220 00:09:55.896497 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:55 crc kubenswrapper[5107]: E0220 00:09:55.996664 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.096741 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.196817 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.297236 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.398130 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.498602 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.599493 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.700863 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.802167 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:56 crc kubenswrapper[5107]: E0220 00:09:56.903130 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.004215 5107 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.105262 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.205736 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.306202 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.406558 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: I0220 00:09:57.486020 5107 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 20 00:09:57 crc kubenswrapper[5107]: I0220 00:09:57.486890 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:09:57 crc kubenswrapper[5107]: I0220 00:09:57.486928 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:09:57 crc kubenswrapper[5107]: I0220 00:09:57.486942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.487319 5107 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.507707 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.608199 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.708324 
5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.808750 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:57 crc kubenswrapper[5107]: E0220 00:09:57.909334 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.010261 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.111454 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.212289 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.313222 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.414006 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.514655 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.615034 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.716221 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc kubenswrapper[5107]: E0220 00:09:58.816632 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:58 crc 
kubenswrapper[5107]: E0220 00:09:58.917781 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.018743 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.119959 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.220966 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.321609 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.422403 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.523279 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.623861 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.724472 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.825561 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:09:59 crc kubenswrapper[5107]: E0220 00:09:59.925919 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.027489 5107 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.127663 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.229036 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.330287 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.431189 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.531346 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.632802 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.733418 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.834413 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:00 crc kubenswrapper[5107]: E0220 00:10:00.935675 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.035901 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.137594 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.238660 5107 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.339709 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.440702 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.541602 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.642052 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.742752 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.843642 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:01 crc kubenswrapper[5107]: E0220 00:10:01.944455 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.044635 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.145697 5107 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.149776 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.196122 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.208222 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.247952 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.248028 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.248053 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.248084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.248106 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.310581 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.350225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.350281 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.350301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.350330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.350348 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.399234 5107 apiserver.go:52] "Watching apiserver" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.406847 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.407619 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.407721 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-kkvfh","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-glc89","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-4wch6","openshift-multus/multus-additional-cni-plugins-jh827","openshift-multus/network-metrics-daemon-j2l2p","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp","openshift-machine-config-operator/machine-config-daemon-5bqkx","openshift-multus/multus-fnskd","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.409210 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.410062 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.410193 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.411055 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.411174 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.413194 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.413374 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.419246 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.419619 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.419648 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.419863 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.420075 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.420338 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.420490 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.420804 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.442580 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.442724 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.442602 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.443281 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.446349 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.448028 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.448098 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.448420 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.453294 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.454650 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.454798 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.486685 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487130 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487184 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487235 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487402 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487429 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487930 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487942 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487959 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.487971 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.488169 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.488198 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.488228 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.491918 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.492848 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.494285 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.495290 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.499448 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.502231 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.502292 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.503532 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.503715 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507114 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507395 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507501 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507398 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507727 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.507856 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.508085 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.508622 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.508906 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509216 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509385 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509555 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509385 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509780 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.509997 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.510424 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.513045 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cee716c2-1a9a-4944-9b9f-06284973b167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j2l2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.513286 5107 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.515025 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.529818 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.539401 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.549476 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553831 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553871 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-etc-kubernetes\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553896 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553919 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553941 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: 
\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553962 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-daemon-config\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.553975 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.553982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a250015-5489-46f5-95c6-fc5d7f565bb8-host\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554006 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.554034 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.054018766 +0000 UTC m=+89.422676332 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554061 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554077 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554092 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554109 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnk8\" (UniqueName: \"kubernetes.io/projected/cde7379e-ed60-484c-996d-71c37cce9fd0-kube-api-access-5wnk8\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.554124 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhj8\" (UniqueName: \"kubernetes.io/projected/325c1728-1be4-421f-9dcb-514bea2da8b7-kube-api-access-ffhj8\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554155 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2a8cc693-438e-4d3b-8865-7d3907f9dc78-kube-api-access-xdsm6\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554176 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cde7379e-ed60-484c-996d-71c37cce9fd0-tmp-dir\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554190 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554203 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-os-release\") pod \"multus-fnskd\" (UID: 
\"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554217 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-hostroot\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554230 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-conf-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554246 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554259 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-system-cni-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554272 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554285 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554298 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-system-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554317 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554335 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-netns\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554415 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554453 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a250015-5489-46f5-95c6-fc5d7f565bb8-serviceca\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554481 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554519 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2a8cc693-438e-4d3b-8865-7d3907f9dc78-rootfs\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554544 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554737 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554788 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554806 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554842 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554878 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-kubelet\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.554970 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-snz9c\" (UniqueName: \"kubernetes.io/projected/c9d08e95-6328-4e97-aab4-4dd9913914cc-kube-api-access-snz9c\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555029 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555087 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-cnibin\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555128 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm7m\" (UniqueName: \"kubernetes.io/projected/cee716c2-1a9a-4944-9b9f-06284973b167-kube-api-access-fhm7m\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555220 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a8cc693-438e-4d3b-8865-7d3907f9dc78-proxy-tls\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555237 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-cni-binary-copy\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555252 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555283 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555283 5107 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555298 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr8l\" (UniqueName: \"kubernetes.io/projected/8a250015-5489-46f5-95c6-fc5d7f565bb8-kube-api-access-vmr8l\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555318 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555333 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-k8s-cni-cncf-io\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555370 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555389 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555442 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555514 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555565 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555664 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.555711 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-socket-dir-parent\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.556613 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557060 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557167 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557239 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.557281 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557325 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdnk\" (UniqueName: \"kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557395 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557445 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a8cc693-438e-4d3b-8865-7d3907f9dc78-mcd-auth-proxy-config\") pod \"machine-config-daemon-5bqkx\" 
(UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557525 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-cnibin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-bin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557593 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557627 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cde7379e-ed60-484c-996d-71c37cce9fd0-hosts-file\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557659 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-os-release\") pod \"multus-additional-cni-plugins-jh827\" (UID: 
\"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557697 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-multus\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.557847 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-multus-certs\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.557893 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.557998 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.057970455 +0000 UTC m=+89.426628061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.558131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.558226 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.559840 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.560896 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: 
\"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.567193 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.567621 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.569912 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.569935 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.569949 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.570011 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.069993739 +0000 UTC m=+89.438651305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.572606 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.573615 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a8cc693-438e-4d3b-8865-7d3907f9dc78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5bqkx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.573985 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.574025 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.574048 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.574192 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.074114924 +0000 UTC m=+89.442772530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.579935 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.583382 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.585313 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-fnskd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d08e95-6328-4e97-aab4-4dd9913914cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snz9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fnskd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.589863 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.589904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.589914 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.589933 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.589942 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.594104 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.598077 5107 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.607626 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.607852 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.618877 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325c1728-1be4-421f-9dcb-514bea2da8b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh827\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.628388 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b227e532-3580-458d-a999-376346b587d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffb40cfb5387637dd74538e38d6fb34e7cc8da65f8ad2eff1895f6909ca0c654\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1346c22559f69486b1be71c79b1b5d84a95c66686bd7804f6040906fd83e3d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e73f5bd7fd072c39cd086b8592d379aa14a144a641a7a82a2c04d566c4c7010f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109
a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.638425 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325c1728-1be4-421f-9dcb-514bea2da8b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh827\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.645204 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff511768-9c0a-4c27-a386-24c9cd8c4eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzt7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzt7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bf9fp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.652902 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cee716c2-1a9a-4944-9b9f-06284973b167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j2l2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658769 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658812 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658839 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658862 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658883 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658906 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod 
\"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658929 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658950 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658972 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.658993 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659018 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659099 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659123 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659179 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659220 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659257 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.659288 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659318 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659348 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659380 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659415 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659447 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659478 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659510 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659542 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659681 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.659713 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659743 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659777 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659809 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659839 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659874 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659909 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659937 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.659967 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660005 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660019 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660045 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660087 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660131 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660188 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660200 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660311 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660352 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660380 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660404 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660430 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660455 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660477 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660540 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660566 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660588 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod 
\"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660611 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660634 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661899 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661959 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662001 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662041 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662075 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662109 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662164 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:10:02 
crc kubenswrapper[5107]: I0220 00:10:02.662199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662311 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662349 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662391 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662424 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662457 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662491 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662523 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662557 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662593 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662631 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.662674 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662709 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662745 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662780 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662813 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662854 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: 
\"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662888 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662922 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662956 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662990 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663029 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.663067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663101 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664414 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664457 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664488 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664516 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: 
\"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664541 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664566 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664593 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664619 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664646 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.664667 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664724 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664753 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664779 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664804 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664834 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664866 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664891 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664916 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664943 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664968 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664993 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665019 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665045 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665073 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665103 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665128 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: 
\"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665169 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665200 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665226 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665251 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665276 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.665302 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665331 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665358 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665385 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665519 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665551 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: 
\"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665579 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665607 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665633 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665663 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665695 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: 
\"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665726 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665752 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665776 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665804 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665829 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665860 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665890 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665919 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665943 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665981 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666007 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: 
I0220 00:10:02.666036 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666096 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666125 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666168 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666197 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: 
\"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666227 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666254 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666280 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666306 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666334 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.666363 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666392 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666419 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666447 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666478 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666506 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666532 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666561 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666592 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666621 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666648 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666675 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666703 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666730 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666760 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666790 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.666853 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666881 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666909 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666937 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666964 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666991 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod 
\"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667020 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667050 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667082 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667113 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667162 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667194 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667227 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667256 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667283 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667375 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667411 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 
20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667439 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667469 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667498 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667526 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667556 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667596 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: 
\"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667656 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667686 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667715 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667831 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667864 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.667894 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668306 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668353 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660627 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660799 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661125 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.660974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661773 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661898 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.661890 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.662770 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663289 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671858 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663392 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663213 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663684 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.663880 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664230 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664391 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671971 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.664801 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665022 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665193 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665561 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665599 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665934 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672114 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.665995 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666015 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666258 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666290 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666164 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666711 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666511 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666746 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.666888 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667165 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667378 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667310 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667490 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667527 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667685 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.667829 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668083 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.668402 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.168378534 +0000 UTC m=+89.537036100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672338 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672377 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672408 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672437 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672473 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672502 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672527 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672553 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672583 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: 
\"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672648 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672677 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672732 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672761 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672777 5107 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672792 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672824 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672823 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672885 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672923 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672953 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672954 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.672981 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673016 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673044 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673072 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673094 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673106 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-multus\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673309 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-multus-certs\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673351 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673380 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673428 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-etc-kubernetes\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673425 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673460 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673539 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-daemon-config\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673564 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a250015-5489-46f5-95c6-fc5d7f565bb8-host\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673622 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673649 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673757 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673760 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673788 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnk8\" (UniqueName: \"kubernetes.io/projected/cde7379e-ed60-484c-996d-71c37cce9fd0-kube-api-access-5wnk8\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673807 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673820 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhj8\" (UniqueName: \"kubernetes.io/projected/325c1728-1be4-421f-9dcb-514bea2da8b7-kube-api-access-ffhj8\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673849 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2a8cc693-438e-4d3b-8865-7d3907f9dc78-kube-api-access-xdsm6\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673878 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cde7379e-ed60-484c-996d-71c37cce9fd0-tmp-dir\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673897 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673938 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-os-release\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673942 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673958 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-hostroot\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673997 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.673990 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-hostroot\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674106 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668623 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668915 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.668862 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669068 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669065 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669398 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669541 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669708 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669731 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669816 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669854 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669932 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.669996 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670407 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670509 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670660 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670715 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670504 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670860 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.670981 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671320 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671332 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671380 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.671700 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674197 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674289 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674312 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-conf-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674393 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674428 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-system-cni-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674454 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674484 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674515 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-system-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674542 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674567 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-netns\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674604 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674633 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a250015-5489-46f5-95c6-fc5d7f565bb8-serviceca\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674990 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2a8cc693-438e-4d3b-8865-7d3907f9dc78-rootfs\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675714 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675771 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675890 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-kubelet\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675932 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-snz9c\" (UniqueName: \"kubernetes.io/projected/c9d08e95-6328-4e97-aab4-4dd9913914cc-kube-api-access-snz9c\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675974 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676013 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-cnibin\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676057 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm7m\" (UniqueName: \"kubernetes.io/projected/cee716c2-1a9a-4944-9b9f-06284973b167-kube-api-access-fhm7m\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676097 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a8cc693-438e-4d3b-8865-7d3907f9dc78-proxy-tls\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676137 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-cni-binary-copy\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676236 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8a250015-5489-46f5-95c6-fc5d7f565bb8-serviceca\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676259 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676813 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677277 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674481 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca"
(OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674585 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674925 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675009 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675195 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675217 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675408 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675432 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675448 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675483 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675571 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675584 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675843 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.675885 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676045 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676061 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676544 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676742 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.676891 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677128 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677220 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677381 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677400 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-system-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677443 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-cni-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677462 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-netns\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677490 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.677533 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677727 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677772 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677765 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.677981 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678014 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678021 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678063 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678082 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678232 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678378 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678463 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678500 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678602 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678612 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678713 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678720 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.679528 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678754 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678781 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678885 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.678949 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.679843 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2a8cc693-438e-4d3b-8865-7d3907f9dc78-rootfs\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.680345 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.680514 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.680756 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.680810 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.680997 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681022 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681123 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681414 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681434 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681470 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681713 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681801 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.681957 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682041 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682178 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682267 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682280 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682337 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682363 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682419 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682458 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682540 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.682623 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683034 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683131 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683150 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683162 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683283 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683307 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683302 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"734f5043-9693-4cf9-82ef-85aaa490c018\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab97dc99887641495dad3cb72be9ca892429d9f760851d511ba737c10aa533c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://14649d9c6bacb041e78519e3f952045ad2764afa541e79ac4ebbba3f56a9fe60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7191b4213be9fa380280dfde810cd85aad6b56fefb3983b43b1e50eca05dfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d6485423a2d33593eca47b7ecb17b0fd5cc6b1952a7ed37022254df9c794a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd2b23ba1c015bb90e717507a601c198e1441b517ca59751798a2438d6162355\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683455 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683526 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683588 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683684 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-multus\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683768 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-os-release\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683899 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch\") pod \"ovnkube-node-glc89\" (UID: 
\"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.674175 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-conf-dir\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683955 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683970 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684016 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr8l\" (UniqueName: \"kubernetes.io/projected/8a250015-5489-46f5-95c6-fc5d7f565bb8-kube-api-access-vmr8l\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684049 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: 
\"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684071 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-kubelet\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684080 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-k8s-cni-cncf-io\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684170 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684219 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684225 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: 
"c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684304 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-cnibin\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684330 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684362 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684387 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-system-cni-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684383 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/325c1728-1be4-421f-9dcb-514bea2da8b7-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684408 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684588 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-multus-certs\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684653 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684696 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684732 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a250015-5489-46f5-95c6-fc5d7f565bb8-host\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 
00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684731 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-etc-kubernetes\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685131 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-cni-binary-copy\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685330 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-daemon-config\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685468 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-run-k8s-cni-cncf-io\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.684651 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.683909 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685239 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.685633 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:02 crc kubenswrapper[5107]: E0220 00:10:02.686533 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:03.186513158 +0000 UTC m=+89.555170804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685832 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685856 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685863 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685769 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cde7379e-ed60-484c-996d-71c37cce9fd0-tmp-dir\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686614 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685781 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-socket-dir-parent\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685886 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686637 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686666 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686695 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686724 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686753 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzt7b\" (UniqueName: \"kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686783 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686837 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdnk\" (UniqueName: \"kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686867 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a8cc693-438e-4d3b-8865-7d3907f9dc78-mcd-auth-proxy-config\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686893 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-cnibin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686921 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-bin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686945 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686977 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687003 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cde7379e-ed60-484c-996d-71c37cce9fd0-hosts-file\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687029 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-os-release\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687180 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687199 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687213 5107 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687229 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687244 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687256 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687269 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687283 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687296 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687310 5107 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687323 5107 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687335 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687349 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687363 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687376 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687389 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687402 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687417 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687431 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687445 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687459 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687474 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687486 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687500 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687515 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687529 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687542 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687555 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687572 5107 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687585 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687598 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687641 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687656 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687672 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687685 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687675 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687697 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687738 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687761 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-os-release\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687768 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.687794 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686281 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688254 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688335 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325c1728-1be4-421f-9dcb-514bea2da8b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.685831 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-multus-socket-dir-parent\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.686219 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688582 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-host-var-lib-cni-bin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.688598 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9d08e95-6328-4e97-aab4-4dd9913914cc-cnibin\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689080 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689192 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cde7379e-ed60-484c-996d-71c37cce9fd0-hosts-file\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689271 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a8cc693-438e-4d3b-8865-7d3907f9dc78-mcd-auth-proxy-config\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689473 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689488 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689500 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689510 5107 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689519 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689529 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689553 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689708 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689822 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689841 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689854 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689865 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689877 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689889 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689905 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689919 5107 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689931 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689943 5107 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689956 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689969 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.689999 5107 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690013 5107 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690025 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690036 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690049 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690062 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690077 5107 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690088 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690102 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690113 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690124 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690126 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a8cc693-438e-4d3b-8865-7d3907f9dc78-proxy-tls\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690180 5107 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690195 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690206 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690217 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690229 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690241 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690253 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690265 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690276 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690288 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690300 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690311 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690323 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690328 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690334 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690358 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690371 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690381 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690391 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690401 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690410 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690420 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690433 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690444 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690453 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690463 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName:
\"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690472 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690482 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690493 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690503 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690515 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690517 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690525 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690592 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690608 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690623 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690638 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690654 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690668 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 20 
00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690681 5107 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690694 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690706 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690719 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690732 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690749 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690761 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690773 5107 reconciler_common.go:299] 
"Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690786 5107 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690799 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690813 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690826 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690826 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert\") pod \"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690841 5107 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.690855 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690872 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690942 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.690956 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691004 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691108 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691122 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691136 5107 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691167 5107 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691181 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691194 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691208 5107 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691220 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691234 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691247 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691260 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691249 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691274 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691355 5107 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.691357 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.692115 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.692134 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.692160 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.692174 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.692184 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.698965 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.703136 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.704211 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.704368 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.704379 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706134 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706216 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhj8\" (UniqueName: \"kubernetes.io/projected/325c1728-1be4-421f-9dcb-514bea2da8b7-kube-api-access-ffhj8\") pod \"multus-additional-cni-plugins-jh827\" (UID: \"325c1728-1be4-421f-9dcb-514bea2da8b7\") " pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706288 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706402 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706450 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706748 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.706939 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.707392 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.707425 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.707469 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.707669 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-snz9c\" (UniqueName: \"kubernetes.io/projected/c9d08e95-6328-4e97-aab4-4dd9913914cc-kube-api-access-snz9c\") pod \"multus-fnskd\" (UID: \"c9d08e95-6328-4e97-aab4-4dd9913914cc\") " pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.707841 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm7m\" (UniqueName: \"kubernetes.io/projected/cee716c2-1a9a-4944-9b9f-06284973b167-kube-api-access-fhm7m\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.708399 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdnk\" (UniqueName: \"kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk\") pod 
\"ovnkube-node-glc89\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") " pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.709104 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.709369 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.709438 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr8l\" (UniqueName: \"kubernetes.io/projected/8a250015-5489-46f5-95c6-fc5d7f565bb8-kube-api-access-vmr8l\") pod \"node-ca-4wch6\" (UID: \"8a250015-5489-46f5-95c6-fc5d7f565bb8\") " pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.710792 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnk8\" (UniqueName: \"kubernetes.io/projected/cde7379e-ed60-484c-996d-71c37cce9fd0-kube-api-access-5wnk8\") pod \"node-resolver-kkvfh\" (UID: \"cde7379e-ed60-484c-996d-71c37cce9fd0\") " pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.711450 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.711581 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.711746 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.711924 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.712341 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.711363 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.712840 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713061 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713206 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713356 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713414 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713771 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.713917 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714177 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714316 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714315 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714478 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714812 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.714948 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.715284 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.715293 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.715432 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.715497 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.715967 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716028 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdsm6\" (UniqueName: \"kubernetes.io/projected/2a8cc693-438e-4d3b-8865-7d3907f9dc78-kube-api-access-xdsm6\") pod \"machine-config-daemon-5bqkx\" (UID: \"2a8cc693-438e-4d3b-8865-7d3907f9dc78\") " pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716061 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716106 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716118 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716180 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716297 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716408 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716485 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.716507 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.717011 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.717054 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.717520 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.720500 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.724252 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.731773 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.733584 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.741089 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kkvfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cde7379e-ed60-484c-996d-71c37cce9fd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wnk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kkvfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.743157 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.745451 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.748034 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4wch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a250015-5489-46f5-95c6-fc5d7f565bb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmr8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4wch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.751850 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.754576 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be093228-0bf8-4de7-9845-5849d3f01dc9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae855fa132589a0ff2cab5d759b987873a303aa4bb84df6a28b53fa6b464e49b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[
65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.764634 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dedae9d109
92c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-20T00:09:45Z\\\",\\\"message\\\":\\\"lse\\\\nI0220 00:09:45.270122 1 observer_polling.go:159] Starting file observer\\\\nW0220 00:09:45.282281 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0220 00:09:45.282391 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0220 00:09:45.283073 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1742683989/tls.crt::/tmp/serving-cert-1742683989/tls.key\\\\\\\"\\\\nI0220 00:09:45.826245 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0220 00:09:45.827577 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0220 00:09:45.827591 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0220 00:09:45.827609 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0220 00:09:45.827614 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0220 00:09:45.831521 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0220 00:09:45.831529 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0220 00:09:45.831572 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0220 00:09:45.831582 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0220 00:09:45.831591 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0220 00:09:45.831598 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0220 00:09:45.831603 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0220 00:09:45.831608 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0220 00:09:45.832937 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-20T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.768521 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.772997 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a8cc693-438e-4d3b-8865-7d3907f9dc78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsm6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5bqkx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.781483 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-fnskd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9d08e95-6328-4e97-aab4-4dd9913914cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-snz9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fnskd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.784254 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-27e048bba560a2d0fb58c8958043f48a6d570e2989b7c098be6f1acc28b576c1 WatchSource:0}: Error finding container 27e048bba560a2d0fb58c8958043f48a6d570e2989b7c098be6f1acc28b576c1: Status 404 returned error can't find the container with id 27e048bba560a2d0fb58c8958043f48a6d570e2989b7c098be6f1acc28b576c1 Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.787400 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.791127 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1a12c6c-15bc-4e43-9e5e-5efda5718ceb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://58824b379b4ba7a83b26b170ff5f48aa0cbf08e2316033a29a3551130089e9a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supp
lementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75a0b0bb620b8d44941a13deb15b5425f660fa29906845162437be55efde7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://aae38182766654da3531d1b7fce65ce54baf0eb6cdef526e425fd10caafab1ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":t
rue,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.792055 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.792118 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pzt7b\" (UniqueName: \"kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.792205 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.792526 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.792883 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793063 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793136 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793246 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793314 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793367 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793428 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 
00:10:02.793487 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793556 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793705 5107 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793775 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793835 5107 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793913 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793974 5107 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794041 5107 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" 
DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794097 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794173 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793843 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.793213 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794341 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794410 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794425 5107 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794437 5107 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794450 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794463 5107 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794475 5107 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794487 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794502 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794515 
5107 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794527 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794539 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794550 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794561 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794572 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794583 5107 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794596 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794607 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794617 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794627 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794638 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794649 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794660 5107 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794672 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc 
kubenswrapper[5107]: I0220 00:10:02.794683 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794698 5107 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794708 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794720 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794731 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794741 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794752 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794763 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794773 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794784 5107 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794795 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794806 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794818 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794829 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794841 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath 
\"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794852 5107 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794863 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794875 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794886 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794898 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794938 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794952 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794964 5107 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.794976 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795009 5107 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795022 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795033 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795045 5107 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795077 5107 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795364 5107 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795388 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795400 5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795411 5107 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795423 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795434 5107 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795515 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795526 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795538 5107 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795548 5107 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795559 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795571 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795583 5107 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795595 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.795607 5107 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.797468 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.800536 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.800862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.800774 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.801352 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.802310 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.802347 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.802401 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.812153 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.813251 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jh827" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.818471 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzt7b\" (UniqueName: \"kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b\") pod \"ovnkube-control-plane-57b78d8988-bf9fp\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.822881 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-37678811168ec43924322fe6c3bda00dd0412d7c18d4e7f58b02c96148210c88 WatchSource:0}: Error finding container 37678811168ec43924322fe6c3bda00dd0412d7c18d4e7f58b02c96148210c88: Status 404 returned error can't find the container with id 37678811168ec43924322fe6c3bda00dd0412d7c18d4e7f58b02c96148210c88 Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.823508 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.826929 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glc89\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.829368 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fnskd" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.834735 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8cc693_438e_4d3b_8865_7d3907f9dc78.slice/crio-2eded63afbf9d0c25649000db0482592314624cad2bdc0efb83508d12b40da3b WatchSource:0}: Error finding container 2eded63afbf9d0c25649000db0482592314624cad2bdc0efb83508d12b40da3b: Status 404 returned error can't find the container with id 2eded63afbf9d0c25649000db0482592314624cad2bdc0efb83508d12b40da3b Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.835920 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.842382 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kkvfh" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.842797 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d08e95_6328_4e97_aab4_4dd9913914cc.slice/crio-723d64569d5bf1e3a61f79bc4cb1a7d20439d74c020177cec1645df6989487fc WatchSource:0}: Error finding container 723d64569d5bf1e3a61f79bc4cb1a7d20439d74c020177cec1645df6989487fc: Status 404 returned error can't find the container with id 723d64569d5bf1e3a61f79bc4cb1a7d20439d74c020177cec1645df6989487fc Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.849727 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4wch6" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.850198 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6b562f_b4f7_400d_b6c2_cf5df40d6eaf.slice/crio-600f5d41bba18f5b995d26c293f990882bfa7d486b864f5613eff555c18a946d WatchSource:0}: Error finding container 600f5d41bba18f5b995d26c293f990882bfa7d486b864f5613eff555c18a946d: Status 404 returned error can't find the container with id 600f5d41bba18f5b995d26c293f990882bfa7d486b864f5613eff555c18a946d Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.854336 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.855467 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde7379e_ed60_484c_996d_71c37cce9fd0.slice/crio-93b167c90c2e2da7f4edaf3b18392af8809b9c8690c8e98c3f3e376cba5fe58b WatchSource:0}: Error finding container 93b167c90c2e2da7f4edaf3b18392af8809b9c8690c8e98c3f3e376cba5fe58b: Status 404 returned error can't find the container with id 93b167c90c2e2da7f4edaf3b18392af8809b9c8690c8e98c3f3e376cba5fe58b Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.860540 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a250015_5489_46f5_95c6_fc5d7f565bb8.slice/crio-58d91ec0c438db83fd97762c0432fc1a532bdf50031f84eec19c6eee321cfb4c WatchSource:0}: Error finding container 58d91ec0c438db83fd97762c0432fc1a532bdf50031f84eec19c6eee321cfb4c: Status 404 returned error can't find the container with id 58d91ec0c438db83fd97762c0432fc1a532bdf50031f84eec19c6eee321cfb4c Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.860730 5107 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"2eded63afbf9d0c25649000db0482592314624cad2bdc0efb83508d12b40da3b"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.862948 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerStarted","Data":"9d316249ca8e4db4a49ea253754d2165da8c5a59cf7ef489c1d2d87dab7b6cc8"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.863636 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3dd9349c59e85b79dad416038eaa03c0dacc450eedc861e23d1c79b553493eaa"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.865867 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"600f5d41bba18f5b995d26c293f990882bfa7d486b864f5613eff555c18a946d"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.866807 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fnskd" event={"ID":"c9d08e95-6328-4e97-aab4-4dd9913914cc","Type":"ContainerStarted","Data":"723d64569d5bf1e3a61f79bc4cb1a7d20439d74c020177cec1645df6989487fc"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.867461 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"37678811168ec43924322fe6c3bda00dd0412d7c18d4e7f58b02c96148210c88"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.868878 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"27e048bba560a2d0fb58c8958043f48a6d570e2989b7c098be6f1acc28b576c1"} Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.869561 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kkvfh" event={"ID":"cde7379e-ed60-484c-996d-71c37cce9fd0","Type":"ContainerStarted","Data":"93b167c90c2e2da7f4edaf3b18392af8809b9c8690c8e98c3f3e376cba5fe58b"} Feb 20 00:10:02 crc kubenswrapper[5107]: W0220 00:10:02.884030 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff511768_9c0a_4c27_a386_24c9cd8c4eac.slice/crio-2ce5e4357c6925a1748bb64dbd95e0e72c4edec329385cfe62d873a2808c712e WatchSource:0}: Error finding container 2ce5e4357c6925a1748bb64dbd95e0e72c4edec329385cfe62d873a2808c712e: Status 404 returned error can't find the container with id 2ce5e4357c6925a1748bb64dbd95e0e72c4edec329385cfe62d873a2808c712e Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.904309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.904341 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.904349 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.904361 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:02 crc kubenswrapper[5107]: I0220 00:10:02.904372 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:02Z","lastTransitionTime":"2026-02-20T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.008855 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.009120 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.009129 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.009157 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.009166 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.098625 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.098674 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.098702 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.098739 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.098952 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.098975 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.098987 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099033 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:04.099019222 +0000 UTC m=+90.467676788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099310 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099341 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-20 00:10:04.09933382 +0000 UTC m=+90.467991386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099406 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099427 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:04.099421553 +0000 UTC m=+90.468079119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099463 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099471 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099478 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.099497 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:04.099491345 +0000 UTC m=+90.468148911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.119572 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.119602 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.119623 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.119637 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.119646 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.199193 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.199335 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.199421 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:04.199398241 +0000 UTC m=+90.568055807 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.199484 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.199653 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:04.199644258 +0000 UTC m=+90.568301824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.220655 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.220704 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.220717 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.220734 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.220745 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.322868 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.322911 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.322941 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.322962 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.322978 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.425255 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.425293 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.425301 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.425315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.425325 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.485243 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.485369 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.526996 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.527037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.527048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.527065 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.527078 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.628776 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.628817 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.628832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.628849 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.628859 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.731279 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.731334 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.731347 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.731367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.731382 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.834122 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.834228 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.834248 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.834275 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.834294 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.881860 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kkvfh" event={"ID":"cde7379e-ed60-484c-996d-71c37cce9fd0","Type":"ContainerStarted","Data":"a2ac983d11a2b54d3653361c0f94fc3acaf8b033596b6606daa26074a8cb2025"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.885317 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"8f812eb41cec685d2f6437c07a10e4d7996b88945ebf11cbc56e8fae73c53a73"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.885378 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.889539 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4wch6" event={"ID":"8a250015-5489-46f5-95c6-fc5d7f565bb8","Type":"ContainerStarted","Data":"f65d147bbcd71e922902ab90ca74f06dc27ffd782d1c83aa1e30c4894d7fcb0d"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.889582 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4wch6" event={"ID":"8a250015-5489-46f5-95c6-fc5d7f565bb8","Type":"ContainerStarted","Data":"58d91ec0c438db83fd97762c0432fc1a532bdf50031f84eec19c6eee321cfb4c"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.892696 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="9968bd56bbece179916db8507f600f115c6f287694215d673c8f6a2b4dbc2d93" exitCode=0 Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.892870 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"9968bd56bbece179916db8507f600f115c6f287694215d673c8f6a2b4dbc2d93"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.894878 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fnskd" event={"ID":"c9d08e95-6328-4e97-aab4-4dd9913914cc","Type":"ContainerStarted","Data":"11d929442d04c0332dcbbdccbd6f701427a86529b8e63352dc475bc4e2927653"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.900470 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerStarted","Data":"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.900515 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerStarted","Data":"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.900576 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerStarted","Data":"2ce5e4357c6925a1748bb64dbd95e0e72c4edec329385cfe62d873a2808c712e"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.901546 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1a12c6c-15bc-4e43-9e5e-5efda5718ceb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://58824b379b4ba7a83b26b170ff5f48aa0cbf08e2316033a29a3551130089e9a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://75a0b0bb620b8d44941a13deb15b5425f660fa29906845162437be55efde7325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://aae38182766654da3531d1b7fce65ce54baf0eb6cdef526e425fd10caafab1ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c002cafcf30b467c4db98b57fe8122464f6f6d556fd6ac134174ad43e52f7ad5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.903161 5107 
generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="fd6e80e626cdc82d03d3d991c2c4c0a7f2e01654baa8f006cb9264956a58eb6b" exitCode=0 Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.903268 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"fd6e80e626cdc82d03d3d991c2c4c0a7f2e01654baa8f006cb9264956a58eb6b"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.907086 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3b55b97eb64bbdfc9824bf336bb0be1d3d92b034517618f1916e6919ecc62df8"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.907159 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"77860d966b1234857d4ebb26b90ee83e0e84b6437a81b87a55efcd787f54a236"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.909539 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d306cc969c3c1c479b2a99784f2563111c3752539ea1e99cf48150bb9e6dfb17"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.917841 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.932946 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.936387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.936429 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.936440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.936458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.936470 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.939445 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.939791 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.939809 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.939832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.939849 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.957436 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.959534 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qhdnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glc89\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.961819 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.961880 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.961899 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.961921 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.961936 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.971812 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.972959 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b227e532-3580-458d-a999-376346b587d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ffb40cfb5387637dd74538e38d6fb34e7cc8da65f8ad2eff1895f6909ca0c654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f76
0b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1346c22559f69486b1be71c79b1b5d84a95c66686bd7804f6040906fd83e3d99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e73f5bd7fd072c39cd086b8592d379aa14a144a641a7a82a2c04d566c4c7010f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.974858 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.974910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.974924 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.974947 5107 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.974962 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.984931 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325c1728-1be4-421f-9dcb-514bea2da8b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jh827\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.984854 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.988234 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.988270 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.988280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.988294 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.988303 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:03Z","lastTransitionTime":"2026-02-20T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:03 crc kubenswrapper[5107]: I0220 00:10:03.994485 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff511768-9c0a-4c27-a386-24c9cd8c4eac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzt7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzt7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-bf9fp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:03 crc kubenswrapper[5107]: E0220 00:10:03.996912 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.000885 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.000943 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.000956 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.000976 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.000989 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.001365 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cee716c2-1a9a-4944-9b9f-06284973b167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j2l2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.011134 5107 kubelet_node_status.go:597] "Error updating node status, 
will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cb8ca53b-411e-4259-ae0d-d078aa1f4c50\\\",\\\"systemUUID\\\":\\\"3738b857-e068-44b2-8a5a-d59e1fffbda6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.011372 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.022581 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"734f5043-9693-4cf9-82ef-85aaa490c018\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ab97dc99887641495dad3cb72be9ca892429d9f760851d511ba737
c10aa533c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://14649d9c6bacb041e78519e3f952045ad2764afa541e79ac4ebbba3f56a9fe60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7191b4213be9fa380280dfde810cd85aad6b56fefb3983b43b1e50eca05dfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d6485423a2d33593eca47b7ecb17b0fd5cc6b1952a7ed37022254df9c794a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\
",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd2b23ba1c015bb90e717507a601c198e1441b517ca59751798a2438d6162355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\
":\\\"cri-o://5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8e833acadd77049f21cdf569b596c3527f1a08d0fc3f94ffc6e0246d99b234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1554df061f39fc77ea8d400750c1c8dd190bd9c2803471eba36e8d53b96965d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-02-20T00:08:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a01a0e88db61e63096ae1b23bc61fb503908fc79092380bea076ede86f03f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.034517 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.038402 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.038453 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.038467 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.038485 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.038496 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.043558 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.054121 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.063235 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.070333 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-kkvfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cde7379e-ed60-484c-996d-71c37cce9fd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2ac983d11a2b54d3653361c0f94fc3acaf8b033596b6606daa26074a8cb2025\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:10:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wnk8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kkvfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.076865 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4wch6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a250015-5489-46f5-95c6-fc5d7f565bb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmr8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4wch6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.084373 5107 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be093228-0bf8-4de7-9845-5849d3f01dc9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-20T00:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae855fa132589a0ff2cab5d759b987873a303aa4bb84df6a28b53fa6b464e49b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-20T00:08:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00525d7855a25a393026add38fc9e33a1183e3d9dcd334a8be72ccf6b8b885d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-20T00:08:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-20T00:08:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-20T00:08:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.112423 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.114097 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.114176 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.114199 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114330 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114344 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114353 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114389 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:06.11437613 +0000 UTC m=+92.483033696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114391 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114425 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114440 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114509 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:06.114488833 +0000 UTC m=+92.483146459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114581 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114596 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114617 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:06.114606916 +0000 UTC m=+92.483264562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.114636 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:06.114626767 +0000 UTC m=+92.483284453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.147405 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.147692 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.147726 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.147747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.147765 5107 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.189671 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.189657522 podStartE2EDuration="2.189657522s" podCreationTimestamp="2026-02-20 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.189108466 +0000 UTC m=+90.557766042" watchObservedRunningTime="2026-02-20 00:10:04.189657522 +0000 UTC m=+90.558315088" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.215480 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.215575 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.215670 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" 
failed. No retries permitted until 2026-02-20 00:10:06.215640174 +0000 UTC m=+92.584297750 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.215681 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.215739 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:06.215724976 +0000 UTC m=+92.584382542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.251440 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.251491 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.251516 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.251535 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.251549 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.276317 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kkvfh" podStartSLOduration=68.27629657 podStartE2EDuration="1m8.27629657s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.266463486 +0000 UTC m=+90.635121072" watchObservedRunningTime="2026-02-20 00:10:04.27629657 +0000 UTC m=+90.644954146" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.288081 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4wch6" podStartSLOduration=67.288065507 podStartE2EDuration="1m7.288065507s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.276693301 +0000 UTC m=+90.645350877" watchObservedRunningTime="2026-02-20 00:10:04.288065507 +0000 UTC m=+90.656723093" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.302700 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.302685473 podStartE2EDuration="2.302685473s" podCreationTimestamp="2026-02-20 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.28854566 +0000 UTC m=+90.657203226" watchObservedRunningTime="2026-02-20 00:10:04.302685473 +0000 UTC m=+90.671343039" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.319889 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podStartSLOduration=68.319873031 
podStartE2EDuration="1m8.319873031s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.302673623 +0000 UTC m=+90.671331199" watchObservedRunningTime="2026-02-20 00:10:04.319873031 +0000 UTC m=+90.688530597" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.320020 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fnskd" podStartSLOduration=68.320015645 podStartE2EDuration="1m8.320015645s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.319518861 +0000 UTC m=+90.688176427" watchObservedRunningTime="2026-02-20 00:10:04.320015645 +0000 UTC m=+90.688673221" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.334051 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.334026594 podStartE2EDuration="2.334026594s" podCreationTimestamp="2026-02-20 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.33279418 +0000 UTC m=+90.701451736" watchObservedRunningTime="2026-02-20 00:10:04.334026594 +0000 UTC m=+90.702684160" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.353668 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.353703 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.353711 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.353724 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.353733 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.397990 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.3979717210000002 podStartE2EDuration="2.397971721s" podCreationTimestamp="2026-02-20 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.397681283 +0000 UTC m=+90.766338849" watchObservedRunningTime="2026-02-20 00:10:04.397971721 +0000 UTC m=+90.766629297" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.429389 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" podStartSLOduration=67.429369104 podStartE2EDuration="1m7.429369104s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:04.428219282 +0000 UTC m=+90.796876888" watchObservedRunningTime="2026-02-20 00:10:04.429369104 +0000 UTC m=+90.798026710" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.455636 5107 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.455674 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.455684 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.455697 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.455707 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.487080 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.487191 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.487293 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.487433 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.487485 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:04 crc kubenswrapper[5107]: E0220 00:10:04.487554 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.490014 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.491044 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.492857 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.497041 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.503775 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.515755 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.516982 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.518591 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.519371 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.523323 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.525968 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.529117 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.530330 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.533361 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.534059 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.542174 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.543150 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.545820 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.547440 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.549964 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.552946 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.557359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.557400 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.557410 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.557425 5107 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.557435 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.558637 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.559879 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.562590 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.563520 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.570861 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.573565 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.574409 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.579781 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.580750 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.585388 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.587878 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.597361 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.599408 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.600381 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.600975 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.602785 5107 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.602911 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.608028 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.611382 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.614103 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.616995 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.617552 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.620182 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.621123 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.622317 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.624806 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.627887 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.631350 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.632035 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.633924 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.638342 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.639013 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.640287 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.649436 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.650354 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.652188 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.653991 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.658994 5107 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.659059 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.659069 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.659082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.659091 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.761190 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.761241 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.761253 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.761272 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.761286 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.863280 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.863318 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.863330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.863346 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.863359 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916210 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"e4c55497b2c440e7767f3c6602b03f849de2496fa8c6f6649136e91fbadb1b39"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916464 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"9cd5146ae94c10b4e5bfdd7e8e3bf42cc6d407c2358db371e6e746bbdde1cc93"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916478 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916489 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"877ba0c2d848e84f3ff10b987145674d55fe60ec255efc10f605f24b98963c96"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.916513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c"} Feb 20 00:10:04 crc kubenswrapper[5107]: 
I0220 00:10:04.920002 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerStarted","Data":"c884200eab67e4d9dbe14fe054c56e3f369622be6e7e892c9f314997416619de"} Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.965802 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.965832 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.965843 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.965856 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:04 crc kubenswrapper[5107]: I0220 00:10:04.965864 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:04Z","lastTransitionTime":"2026-02-20T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.067505 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.067544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.067554 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.067569 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.067580 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.170069 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.170194 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.170220 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.170246 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.170264 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.272627 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.273101 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.273122 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.273181 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.273199 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.375251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.375535 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.375664 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.375760 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.375839 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.478568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.478853 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.478958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.479057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.479154 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.485015 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:05 crc kubenswrapper[5107]: E0220 00:10:05.485120 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.580679 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.580746 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.580779 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.580803 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.580822 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.682980 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.683017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.683029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.683044 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.683055 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.785586 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.785621 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.785630 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.785642 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.785651 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.887446 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.887486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.887497 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.887516 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.887528 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.923787 5107 generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="c884200eab67e4d9dbe14fe054c56e3f369622be6e7e892c9f314997416619de" exitCode=0 Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.923846 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"c884200eab67e4d9dbe14fe054c56e3f369622be6e7e892c9f314997416619de"} Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.989673 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.989719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.989732 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.989747 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:05 crc kubenswrapper[5107]: I0220 00:10:05.989758 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:05Z","lastTransitionTime":"2026-02-20T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.094793 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.094831 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.094841 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.094853 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.094865 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.134944 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.134984 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.135004 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.135030 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135159 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135177 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135187 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135219 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135251 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.135235 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.13522199 +0000 UTC m=+96.503879556 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136349 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136371 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136375 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.135308143 +0000 UTC m=+96.503965739 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136413 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136433 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.136414553 +0000 UTC m=+96.505072119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.136449 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.136441864 +0000 UTC m=+96.505099430 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.197592 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.198052 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.198066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.198083 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.198094 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.235421 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.235515 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.235605 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.235556138 +0000 UTC m=+96.604213704 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.235609 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.235716 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:10.235702213 +0000 UTC m=+96.604359769 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.300043 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.300083 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.300093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.300107 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.300118 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.402342 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.402398 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.402411 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.402427 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.402441 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.485712 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.485951 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.485998 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.486028 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.486218 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:06 crc kubenswrapper[5107]: E0220 00:10:06.486395 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.505559 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.505618 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.505638 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.505663 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.505702 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.609380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.609447 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.609463 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.609486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.609501 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.713915 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.714008 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.714035 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.714069 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.714093 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.817910 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.817954 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.817965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.817985 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.817999 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.920563 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.920633 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.920643 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.920659 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.920670 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:06Z","lastTransitionTime":"2026-02-20T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.933549 5107 generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="ead1ab5b8ef2fdab262cbc33573c7c82c7ffc7de251595899774df550827459a" exitCode=0 Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.933612 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"ead1ab5b8ef2fdab262cbc33573c7c82c7ffc7de251595899774df550827459a"} Feb 20 00:10:06 crc kubenswrapper[5107]: I0220 00:10:06.935881 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"fa66638556f06612b6b32281b38b65ece4543c329fe2036c15694c2bf18954f8"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.022708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.023078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.023095 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.023116 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.023129 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.125629 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.125683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.125699 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.125719 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.125734 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.229462 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.229559 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.229586 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.229619 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.229644 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.332133 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.332188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.332199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.332211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.332220 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.435590 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.435665 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.435690 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.435728 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.435756 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.485618 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:07 crc kubenswrapper[5107]: E0220 00:10:07.485838 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.539219 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.539300 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.539321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.539349 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.539374 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.642666 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.642739 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.642758 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.642784 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.642807 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.745121 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.745176 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.745188 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.745205 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.745214 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.848537 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.848606 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.848627 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.848653 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.848672 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.944001 5107 generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="ce9202277345cbb246d04f19431f2cef607d2eec54c3157cdeef2274c3c6e208" exitCode=0 Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.944057 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"ce9202277345cbb246d04f19431f2cef607d2eec54c3157cdeef2274c3c6e208"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.948879 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"683f7d09537d646287589f9dd2c0eedb6307891e7d1e994a18977f335ed81f35"} Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.950332 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.950390 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.950408 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.950434 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:07 crc kubenswrapper[5107]: I0220 00:10:07.950452 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:07Z","lastTransitionTime":"2026-02-20T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.052427 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.052485 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.052503 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.052528 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.052545 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.154848 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.154889 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.154898 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.154913 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.154923 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.256459 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.256542 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.256568 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.256598 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.256623 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.360048 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.360125 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.360199 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.360235 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.360259 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.463330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.463400 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.463425 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.463454 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.463480 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.485942 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.486109 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:08 crc kubenswrapper[5107]: E0220 00:10:08.486117 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:08 crc kubenswrapper[5107]: E0220 00:10:08.486347 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.486463 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:08 crc kubenswrapper[5107]: E0220 00:10:08.486632 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.565820 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.565886 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.565905 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.565930 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.565948 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.668486 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.668548 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.668565 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.668584 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.668597 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.770441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.770508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.770526 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.770562 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.770582 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.873374 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.873448 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.873467 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.873491 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.873511 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.975530 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.975617 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.975639 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.975666 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:08 crc kubenswrapper[5107]: I0220 00:10:08.975685 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:08Z","lastTransitionTime":"2026-02-20T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.078783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.078846 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.078870 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.078903 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.078926 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.180958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.180997 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.181007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.181024 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.181036 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.282741 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.283057 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.283078 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.283104 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.283122 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.385430 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.385492 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.385512 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.385540 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.385559 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.485498 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:09 crc kubenswrapper[5107]: E0220 00:10:09.485681 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.488296 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.488361 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.488380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.488405 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.488423 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.531229 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.591300 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.591353 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.591367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.591385 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.591396 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.694233 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.694309 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.694333 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.694359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.694378 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.796378 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.796421 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.796431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.796446 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.796457 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.899240 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.899302 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.899325 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.899349 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.899368 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:09Z","lastTransitionTime":"2026-02-20T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.965564 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerStarted","Data":"cbb81ffa831c22f321eaa8ce88f7ce0c8849e0f376587a9a80c09113a0c3795b"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.966076 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.966117 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.975137 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerStarted","Data":"2e46f31b495f4c8d2e91baf109598c3107c817a6da695675e636c37c809ac345"} Feb 20 00:10:09 crc kubenswrapper[5107]: I0220 00:10:09.994542 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podStartSLOduration=73.994525443 podStartE2EDuration="1m13.994525443s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:09.993407732 +0000 UTC m=+96.362065358" watchObservedRunningTime="2026-02-20 00:10:09.994525443 +0000 UTC m=+96.363183009" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.000724 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.001668 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.001696 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.001708 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.001726 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.001739 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.104000 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.104049 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.104063 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.104082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.104094 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.181798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.181869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.181898 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.181924 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182036 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182052 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182093 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182122 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.182105906 +0000 UTC m=+104.550763482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182166 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.182158228 +0000 UTC m=+104.550815804 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182123 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182292 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182213 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182430 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182468 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182494 5107 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.182453516 +0000 UTC m=+104.551111092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.182565 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.182539948 +0000 UTC m=+104.551197524 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.206669 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.206767 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.206795 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.206824 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.206843 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.282887 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.283031 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.283217 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.283177315 +0000 UTC m=+104.651834901 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.283272 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.283386 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:18.2833572 +0000 UTC m=+104.652014806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.309315 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.309397 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.309416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.309441 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.309460 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.412519 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.412591 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.412609 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.412634 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.412653 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.505271 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.505334 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.505405 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.505621 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.505756 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:10 crc kubenswrapper[5107]: E0220 00:10:10.505997 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.515504 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.515571 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.515594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.515642 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.515665 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.618113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.618211 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.618225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.618244 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.618259 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.720520 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.720582 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.720600 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.720622 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.720640 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.823319 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.823412 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.823433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.823459 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.823478 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.925341 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.925407 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.925425 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.925450 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.925469 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:10Z","lastTransitionTime":"2026-02-20T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:10 crc kubenswrapper[5107]: I0220 00:10:10.981171 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.027969 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.028042 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.028066 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.028097 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.028120 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.033458 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.130367 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.130437 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.130451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.130470 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.130484 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.233029 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.233080 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.233093 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.233112 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.233126 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.336265 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.336323 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.336339 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.336362 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.336378 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.438310 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.438357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.438371 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.438390 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.438407 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.485963 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:11 crc kubenswrapper[5107]: E0220 00:10:11.486111 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.540594 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.540654 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.540672 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.540697 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.540715 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.643783 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.643850 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.643869 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.643894 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.643914 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.746964 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.747431 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.747629 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.747799 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.747949 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.850972 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.851037 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.851056 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.851084 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.851102 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.953534 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.953604 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.953631 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.953662 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.953686 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:11Z","lastTransitionTime":"2026-02-20T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.987686 5107 generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="2e46f31b495f4c8d2e91baf109598c3107c817a6da695675e636c37c809ac345" exitCode=0 Feb 20 00:10:11 crc kubenswrapper[5107]: I0220 00:10:11.987807 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"2e46f31b495f4c8d2e91baf109598c3107c817a6da695675e636c37c809ac345"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.056359 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.056862 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.057062 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.057283 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.057460 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.160017 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.160082 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.160098 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.160118 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.160132 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.262845 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.262881 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.262890 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.262904 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.262913 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.364422 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.364478 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.364491 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.364510 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.364522 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.467113 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.467167 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.467177 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.467189 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.467198 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.485417 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.485417 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:12 crc kubenswrapper[5107]: E0220 00:10:12.485599 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.485983 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:12 crc kubenswrapper[5107]: E0220 00:10:12.486056 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 20 00:10:12 crc kubenswrapper[5107]: E0220 00:10:12.486126 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.570061 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.570128 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.570172 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.570198 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.570216 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.674451 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.674499 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.674511 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.674529 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.674540 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.776987 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.777257 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.777380 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.777475 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.777575 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.845104 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j2l2p"] Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.879225 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.879282 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.879300 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.879321 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.879338 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.980876 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.980918 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.980929 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.980946 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.980958 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:12Z","lastTransitionTime":"2026-02-20T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.995566 5107 generic.go:358] "Generic (PLEG): container finished" podID="325c1728-1be4-421f-9dcb-514bea2da8b7" containerID="18532c585490d2467e65f017a07ffcb2ae91494421f978386973c6d12fd1ac47" exitCode=0 Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.995656 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerDied","Data":"18532c585490d2467e65f017a07ffcb2ae91494421f978386973c6d12fd1ac47"} Feb 20 00:10:12 crc kubenswrapper[5107]: I0220 00:10:12.995918 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:12 crc kubenswrapper[5107]: E0220 00:10:12.998231 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.089927 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.090399 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.090416 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.090433 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.090446 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.192686 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.192815 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.192828 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.192844 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.192853 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.296320 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.296371 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.296387 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.296409 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.296425 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.398907 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.398965 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.398983 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.399007 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.399027 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.485499 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 20 00:10:13 crc kubenswrapper[5107]: E0220 00:10:13.485677 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.486361 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627"
Feb 20 00:10:13 crc kubenswrapper[5107]: E0220 00:10:13.486777 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.500869 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.500919 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.500937 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.500958 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.500975 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.603247 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.603311 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.603330 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.603357 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.603375 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.706071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.706135 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.706221 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.706251 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.706273 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.810735 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.810825 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.810851 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.810883 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.810907 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.913595 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.913661 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.913683 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.913709 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:13 crc kubenswrapper[5107]: I0220 00:10:13.913727 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:13Z","lastTransitionTime":"2026-02-20T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.005912 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jh827" event={"ID":"325c1728-1be4-421f-9dcb-514bea2da8b7","Type":"ContainerStarted","Data":"f565a9d23fbe4f01845320b40b1d74c454fe01315461dbb563572eaa269606fc"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.016325 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.016396 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.016423 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.016458 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.016481 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:14Z","lastTransitionTime":"2026-02-20T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.040831 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jh827" podStartSLOduration=78.040805653 podStartE2EDuration="1m18.040805653s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:14.039397724 +0000 UTC m=+100.408055360" watchObservedRunningTime="2026-02-20 00:10:14.040805653 +0000 UTC m=+100.409463259"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.118953 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.119026 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.119045 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.119071 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.119091 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:14Z","lastTransitionTime":"2026-02-20T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.221544 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.221610 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.221625 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.221647 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.221663 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:14Z","lastTransitionTime":"2026-02-20T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.257435 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.257508 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.257531 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.257556 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.257574 5107 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-20T00:10:14Z","lastTransitionTime":"2026-02-20T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.317698 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"]
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.324481 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.326414 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.327056 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.328312 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.329985 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.456992 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.457082 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a473823-4b90-47f0-9a5b-d61ddaadef0d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.457110 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.457131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a473823-4b90-47f0-9a5b-d61ddaadef0d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.457294 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a473823-4b90-47f0-9a5b-d61ddaadef0d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.461835 5107 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.472162 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.487133 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p"
Feb 20 00:10:14 crc kubenswrapper[5107]: E0220 00:10:14.487512 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.487259 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.487236 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 20 00:10:14 crc kubenswrapper[5107]: E0220 00:10:14.487883 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 20 00:10:14 crc kubenswrapper[5107]: E0220 00:10:14.488336 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.558758 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.558983 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.559003 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a473823-4b90-47f0-9a5b-d61ddaadef0d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.559111 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.559283 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a473823-4b90-47f0-9a5b-d61ddaadef0d-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.559482 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a473823-4b90-47f0-9a5b-d61ddaadef0d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.559660 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a473823-4b90-47f0-9a5b-d61ddaadef0d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.560866 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a473823-4b90-47f0-9a5b-d61ddaadef0d-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.569974 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a473823-4b90-47f0-9a5b-d61ddaadef0d-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.581547 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a473823-4b90-47f0-9a5b-d61ddaadef0d-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p8mx9\" (UID: \"4a473823-4b90-47f0-9a5b-d61ddaadef0d\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:14 crc kubenswrapper[5107]: I0220 00:10:14.653241 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9"
Feb 20 00:10:15 crc kubenswrapper[5107]: I0220 00:10:15.010324 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9" event={"ID":"4a473823-4b90-47f0-9a5b-d61ddaadef0d","Type":"ContainerStarted","Data":"61b75d3431a6cffe70ff5bf63c0a7be603696c4dfae22170f882e3c12ec9b06b"}
Feb 20 00:10:15 crc kubenswrapper[5107]: I0220 00:10:15.486008 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 20 00:10:15 crc kubenswrapper[5107]: E0220 00:10:15.486543 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.017299 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9" event={"ID":"4a473823-4b90-47f0-9a5b-d61ddaadef0d","Type":"ContainerStarted","Data":"e9836712440c31eda5cee6099e6ff6466999c017cc3b1cce5565726339d355ec"}
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.038434 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p8mx9" podStartSLOduration=80.038406378 podStartE2EDuration="1m20.038406378s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:16.037183664 +0000 UTC m=+102.405841290" watchObservedRunningTime="2026-02-20 00:10:16.038406378 +0000 UTC m=+102.407063974"
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.318711 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.485275 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 20 00:10:16 crc kubenswrapper[5107]: E0220 00:10:16.485415 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.485515 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p"
Feb 20 00:10:16 crc kubenswrapper[5107]: E0220 00:10:16.485748 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j2l2p" podUID="cee716c2-1a9a-4944-9b9f-06284973b167"
Feb 20 00:10:16 crc kubenswrapper[5107]: I0220 00:10:16.485780 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 20 00:10:16 crc kubenswrapper[5107]: E0220 00:10:16.486130 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.486242 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 20 00:10:17 crc kubenswrapper[5107]: E0220 00:10:17.486848 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.827821 5107 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.828026 5107 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.888908 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.893044 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.893306 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.896489 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-nksz9"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.896962 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.899563 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.900050 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.900080 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29525760-tv4jx"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.900309 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.901194 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.904698 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.907027 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.907039 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.907230 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-tv4jx"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.909354 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.910179 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.912476 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.912837 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.915342 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.915519 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.916998 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.917427 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.919950 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.920624 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.921679 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.921727 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.921683 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.922605 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-kgrwk"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.922878 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.925413 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.925672 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.925677 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.926928 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.928190 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.928568 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.929750 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4cqtl"]
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930092 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930320 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930412 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930588 5107
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930629 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930759 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930770 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.930904 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.931167 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.931302 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.931394 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.933014 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-p7fkg"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.933809 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 
00:10:17.938918 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.940025 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.940282 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.950393 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.950493 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.950575 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.950641 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.960289 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.963937 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970362 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 
00:10:17.970433 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970531 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970611 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970854 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970864 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-image-import-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970928 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5mb\" (UniqueName: \"kubernetes.io/projected/9060b267-3596-4e76-a820-051838b5a5d9-kube-api-access-kt5mb\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970963 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-encryption-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: 
\"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970981 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4t26\" (UniqueName: \"kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970997 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad513ca-9766-436c-9df6-b59c408fedc4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971017 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971062 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.970983 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971085 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971113 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/155d0fc1-3684-40be-aef9-fc97d74cc33c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971159 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzgt\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-kube-api-access-ctzgt\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971184 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971203 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl7zz\" (UniqueName: \"kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971257 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-audit-policies\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971294 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971315 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpxj\" (UniqueName: \"kubernetes.io/projected/38dcb891-7354-413d-ba1d-016f0522c1bb-kube-api-access-4dpxj\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971340 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-serving-cert\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971391 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-serving-cert\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971413 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971433 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxpxr\" (UniqueName: \"kubernetes.io/projected/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-kube-api-access-bxpxr\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971451 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01658e78-84f0-4da9-8175-eea829ce2c41-audit-dir\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971486 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/155d0fc1-3684-40be-aef9-fc97d74cc33c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971504 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971532 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971550 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-encryption-config\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971587 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s95rn\" (UniqueName: \"kubernetes.io/projected/3ad513ca-9766-436c-9df6-b59c408fedc4-kube-api-access-s95rn\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971617 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit-dir\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971632 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-client\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971634 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971646 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971667 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971684 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971703 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971720 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971760 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9060b267-3596-4e76-a820-051838b5a5d9-machine-approver-tls\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971775 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzsdh\" (UniqueName: \"kubernetes.io/projected/3fabeb78-54c7-4333-af95-7722e3cfffb9-kube-api-access-pzsdh\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971797 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971814 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971828 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-config\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971853 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-auth-proxy-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971868 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dcb891-7354-413d-ba1d-016f0522c1bb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971885 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971901 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971925 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971945 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971959 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tj6\" (UniqueName: \"kubernetes.io/projected/01658e78-84f0-4da9-8175-eea829ce2c41-kube-api-access-x4tj6\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971997 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972011 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-client\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972032 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fabeb78-54c7-4333-af95-7722e3cfffb9-available-featuregates\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972048 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972063 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87t9z\" (UniqueName: \"kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972079 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fabeb78-54c7-4333-af95-7722e3cfffb9-serving-cert\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972094 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-images\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.971926 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.972925 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.973093 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.973914 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.980117 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.980384 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.982929 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.985073 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.990103 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zg2vd"] Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.998532 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.998571 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:17 crc kubenswrapper[5107]: I0220 00:10:17.998660 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.001945 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.002044 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rq6pw"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.002339 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.002565 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.002878 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.003325 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.003453 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.003727 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.003847 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.003952 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.004115 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.004245 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.004516 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.004697 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.004946 5107 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.005055 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.005114 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.005168 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.005254 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.007775 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.010413 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.010893 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.012118 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.017306 5107 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.017789 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.017829 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.026605 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-pv5z4"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.028407 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.028553 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.032397 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-sx776"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.042380 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.046506 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.049478 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.049478 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.049516 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.051564 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.051960 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.052102 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.052604 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.052680 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.052753 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: 
I0220 00:10:18.053597 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.055541 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.055710 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.055845 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.055983 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.057631 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.057760 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.058775 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.059574 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.059654 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.059755 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061115 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061278 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061378 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061497 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061615 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061676 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061818 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061935 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.061998 
5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.064591 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.064701 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.064973 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.065581 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.066317 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.066469 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.066771 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.066915 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.067710 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.068234 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.069191 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.069840 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kv84j"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.070644 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.072548 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.072575 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b78a9450-1db5-496b-a8e7-2c12d8e5525f-webhook-certs\") pod \"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.072594 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2nf\" (UniqueName: 
\"kubernetes.io/projected/b3931e83-9df1-49f4-8f33-5ca09792a062-kube-api-access-lz2nf\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073165 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073266 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073315 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073404 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073585 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073594 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073920 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x4tj6\" (UniqueName: \"kubernetes.io/projected/01658e78-84f0-4da9-8175-eea829ce2c41-kube-api-access-x4tj6\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073952 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmh95\" (UniqueName: \"kubernetes.io/projected/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-kube-api-access-zmh95\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.073999 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.074024 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.074047 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-client\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.074074 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6623692b-4959-4045-8da6-f64819b323e9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: \"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.074113 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fabeb78-54c7-4333-af95-7722e3cfffb9-available-featuregates\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.075391 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: 
I0220 00:10:18.079817 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3fabeb78-54c7-4333-af95-7722e3cfffb9-available-featuregates\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.081277 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082545 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082639 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87t9z\" (UniqueName: \"kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082669 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fabeb78-54c7-4333-af95-7722e3cfffb9-serving-cert\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082697 5107 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-images\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082730 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082769 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-image-import-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082804 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5mb\" (UniqueName: \"kubernetes.io/projected/9060b267-3596-4e76-a820-051838b5a5d9-kube-api-access-kt5mb\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082832 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsnk\" (UniqueName: \"kubernetes.io/projected/e0875e0e-3239-42ce-b8d5-aecba1e04f68-kube-api-access-vqsnk\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082881 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-encryption-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082906 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4t26\" (UniqueName: \"kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082929 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad513ca-9766-436c-9df6-b59c408fedc4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082960 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.082983 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/155d0fc1-3684-40be-aef9-fc97d74cc33c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083007 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb27m\" (UniqueName: \"kubernetes.io/projected/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-kube-api-access-wb27m\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083033 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-serving-cert\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083056 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvb9\" (UniqueName: \"kubernetes.io/projected/cfdfef0c-4111-4a89-aa5a-bf317fc4a772-kube-api-access-8zvb9\") pod \"downloads-747b44746d-p7fkg\" (UID: \"cfdfef0c-4111-4a89-aa5a-bf317fc4a772\") " pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083088 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzgt\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-kube-api-access-ctzgt\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083113 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083137 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dl7zz\" (UniqueName: \"kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-config\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083218 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-audit-policies\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083243 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpxj\" (UniqueName: 
\"kubernetes.io/projected/38dcb891-7354-413d-ba1d-016f0522c1bb-kube-api-access-4dpxj\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083265 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-config\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083291 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083340 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-serving-cert\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083364 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2aaa38bf-6f3d-4f28-8634-564d553f87a6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083392 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083419 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-serving-cert\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083444 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083474 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxpxr\" (UniqueName: \"kubernetes.io/projected/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-kube-api-access-bxpxr\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083498 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01658e78-84f0-4da9-8175-eea829ce2c41-audit-dir\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" 
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083528 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkzk\" (UniqueName: \"kubernetes.io/projected/6623692b-4959-4045-8da6-f64819b323e9-kube-api-access-nwkzk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: \"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083552 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d08a6b-ab51-44fb-a216-4a9beb9e9141-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/155d0fc1-3684-40be-aef9-fc97d74cc33c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083610 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083634 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083657 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3931e83-9df1-49f4-8f33-5ca09792a062-serving-cert\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083664 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.083690 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084227 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-encryption-config\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084280 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-images\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084353 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s95rn\" (UniqueName: \"kubernetes.io/projected/3ad513ca-9766-436c-9df6-b59c408fedc4-kube-api-access-s95rn\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084393 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-trusted-ca\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit-dir\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084561 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-client\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.085778 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-node-pullsecrets\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.084585 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.103424 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-encryption-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.104833 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.105514 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.105800 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-audit-policies\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.105953 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.106467 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad513ca-9766-436c-9df6-b59c408fedc4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.106739 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-etcd-client\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.106864 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.106907 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.108285 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/155d0fc1-3684-40be-aef9-fc97d74cc33c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.108684 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.108732 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit-dir\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.108914 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.109801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-config\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110129 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-encryption-config\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110201 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: 
\"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110351 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-image-import-ca\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110525 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d08a6b-ab51-44fb-a216-4a9beb9e9141-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110743 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110797 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjdjb\" (UniqueName: \"kubernetes.io/projected/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-kube-api-access-bjdjb\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110822 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84d08a6b-ab51-44fb-a216-4a9beb9e9141-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110903 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110955 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110978 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.110995 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01658e78-84f0-4da9-8175-eea829ce2c41-audit-dir\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc 
kubenswrapper[5107]: I0220 00:10:18.111083 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aaa38bf-6f3d-4f28-8634-564d553f87a6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.111728 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.111877 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112111 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112166 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe8d69b-d257-4f34-b535-177002797675-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112557 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaa38bf-6f3d-4f28-8634-564d553f87a6-config\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112632 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112681 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgt9\" (UniqueName: \"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-kube-api-access-pcgt9\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.112980 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9060b267-3596-4e76-a820-051838b5a5d9-machine-approver-tls\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.113847 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pzsdh\" 
(UniqueName: \"kubernetes.io/projected/3fabeb78-54c7-4333-af95-7722e3cfffb9-kube-api-access-pzsdh\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.114024 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72jd\" (UniqueName: \"kubernetes.io/projected/2aaa38bf-6f3d-4f28-8634-564d553f87a6-kube-api-access-p72jd\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.113916 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.114274 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.114259 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d08a6b-ab51-44fb-a216-4a9beb9e9141-config\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.126774 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.126839 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.126870 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-config\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127028 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/155d0fc1-3684-40be-aef9-fc97d74cc33c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127115 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-key\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127179 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-cabundle\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127229 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-auth-proxy-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127256 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dcb891-7354-413d-ba1d-016f0522c1bb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127099 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fabeb78-54c7-4333-af95-7722e3cfffb9-serving-cert\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127278 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127463 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127542 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.127953 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-config\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.128087 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01658e78-84f0-4da9-8175-eea829ce2c41-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.128423 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9060b267-3596-4e76-a820-051838b5a5d9-auth-proxy-config\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 
00:10:18.128625 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.128910 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01658e78-84f0-4da9-8175-eea829ce2c41-serving-cert\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.128913 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-serving-cert\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.128976 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129023 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/38dcb891-7354-413d-ba1d-016f0522c1bb-images\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129063 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fe8d69b-d257-4f34-b535-177002797675-metrics-tls\") pod 
\"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129097 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894d9\" (UniqueName: \"kubernetes.io/projected/b78a9450-1db5-496b-a8e7-2c12d8e5525f-kube-api-access-894d9\") pod \"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129116 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0875e0e-3239-42ce-b8d5-aecba1e04f68-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129342 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129363 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0875e0e-3239-42ce-b8d5-aecba1e04f68-config\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129599 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-etcd-client\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/155d0fc1-3684-40be-aef9-fc97d74cc33c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.129944 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-audit\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.130028 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9060b267-3596-4e76-a820-051838b5a5d9-machine-approver-tls\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.131974 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/38dcb891-7354-413d-ba1d-016f0522c1bb-machine-api-operator-tls\") pod 
\"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.132394 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.133003 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.135605 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.139614 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.139791 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.148341 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.148934 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.152367 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.154627 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-4ltpk"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.154721 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.154740 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.155222 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.157331 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.159595 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.159785 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.159972 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.161872 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lp56s"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.162047 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.166739 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.166861 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.171214 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.171351 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179262 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179691 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179724 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-nksz9"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179737 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179761 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-kgrwk"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179774 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179786 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179801 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179812 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179823 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179843 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29525760-tv4jx"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179853 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-p7fkg"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.179864 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x8c42"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.180235 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.184016 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-85dkw"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.184283 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.187888 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24bsp"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.188072 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190631 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rq6pw"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190652 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190670 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zg2vd"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190680 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4cqtl"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190690 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wrrss"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.190732 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.193340 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kcbw2"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.193446 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.195236 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198254 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198280 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198290 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198299 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198307 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-pv5z4"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198315 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198322 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198330 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198341 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kv84j"] Feb 20 00:10:18 
crc kubenswrapper[5107]: I0220 00:10:18.198349 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198357 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-sx776"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198366 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198374 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198382 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198392 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198399 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kcbw2" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198404 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198548 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kcbw2"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198561 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x8c42"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198571 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lp56s"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198579 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198588 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24bsp"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198597 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198605 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198615 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.198623 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.215527 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230131 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kj8\" (UniqueName: \"kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230267 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84d08a6b-ab51-44fb-a216-4a9beb9e9141-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230543 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aaa38bf-6f3d-4f28-8634-564d553f87a6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230579 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9332ca24-50c6-4625-8e97-a6fd5dd849f3-serving-cert\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230608 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe8d69b-d257-4f34-b535-177002797675-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230631 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230654 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p72jd\" (UniqueName: \"kubernetes.io/projected/2aaa38bf-6f3d-4f28-8634-564d553f87a6-kube-api-access-p72jd\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230681 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d08a6b-ab51-44fb-a216-4a9beb9e9141-config\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230708 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fe8d69b-d257-4f34-b535-177002797675-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" 
(UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230727 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933616c4-9cd3-4c88-8863-111d8e2ec32b-config\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230750 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-service-ca\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230773 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2nf\" (UniqueName: \"kubernetes.io/projected/b3931e83-9df1-49f4-8f33-5ca09792a062-kube-api-access-lz2nf\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230794 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmh95\" (UniqueName: \"kubernetes.io/projected/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-kube-api-access-zmh95\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" 
(UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230848 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6623692b-4959-4045-8da6-f64819b323e9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: \"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230867 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pkd\" (UniqueName: \"kubernetes.io/projected/bc3238c4-513a-495d-835d-da98864cdb8d-kube-api-access-t9pkd\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230882 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/79076589-1a80-4683-a090-5aa445b6eba8-tmpfs\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230908 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20c92049-0ab0-4940-8a29-851dfe180b34-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.230954 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n72jk\" (UniqueName: \"kubernetes.io/projected/1d9c615f-40e1-433c-9607-fbd841b62901-kube-api-access-n72jk\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231006 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231025 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231045 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-srv-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231069 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" 
(UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231193 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-serving-cert\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231212 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jwx\" (UniqueName: \"kubernetes.io/projected/a78d5238-801b-4521-91d2-6b9bed68d61e-kube-api-access-67jwx\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231230 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-console-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231248 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f052856a-22ab-4525-92dc-3baef7ed956a-tmp-dir\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc 
kubenswrapper[5107]: I0220 00:10:18.231272 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxx4\" (UniqueName: \"kubernetes.io/projected/933616c4-9cd3-4c88-8863-111d8e2ec32b-kube-api-access-vdxx4\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231316 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcsk\" (UniqueName: \"kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231350 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkzk\" (UniqueName: \"kubernetes.io/projected/6623692b-4959-4045-8da6-f64819b323e9-kube-api-access-nwkzk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: \"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231368 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84d08a6b-ab51-44fb-a216-4a9beb9e9141-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231384 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231398 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-apiservice-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3931e83-9df1-49f4-8f33-5ca09792a062-serving-cert\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231444 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55527a1c-71b2-4254-82ac-da17df407862-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231466 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-images\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231486 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-trusted-ca\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231513 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d08a6b-ab51-44fb-a216-4a9beb9e9141-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231530 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55527a1c-71b2-4254-82ac-da17df407862-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.231711 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84d08a6b-ab51-44fb-a216-4a9beb9e9141-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d08a6b-ab51-44fb-a216-4a9beb9e9141-config\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.232105 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.232125 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232126 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.232137 5107 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232165 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-oauth-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232196 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjdjb\" (UniqueName: \"kubernetes.io/projected/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-kube-api-access-bjdjb\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.232215 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:34.232201406 +0000 UTC m=+120.600858972 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232243 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaa38bf-6f3d-4f28-8634-564d553f87a6-config\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232294 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswbl\" (UniqueName: \"kubernetes.io/projected/79076589-1a80-4683-a090-5aa445b6eba8-kube-api-access-fswbl\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232316 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgt9\" (UniqueName: 
\"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-kube-api-access-pcgt9\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232332 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933616c4-9cd3-4c88-8863-111d8e2ec32b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232352 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-key\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232369 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-cabundle\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232386 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvh8q\" (UniqueName: \"kubernetes.io/projected/9332ca24-50c6-4625-8e97-a6fd5dd849f3-kube-api-access-wvh8q\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 
00:10:18.232413 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-894d9\" (UniqueName: \"kubernetes.io/projected/b78a9450-1db5-496b-a8e7-2c12d8e5525f-kube-api-access-894d9\") pod \"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232429 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0875e0e-3239-42ce-b8d5-aecba1e04f68-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232446 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0875e0e-3239-42ce-b8d5-aecba1e04f68-config\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232469 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232486 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b78a9450-1db5-496b-a8e7-2c12d8e5525f-webhook-certs\") pod \"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232502 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxdkv\" (UniqueName: \"kubernetes.io/projected/20c92049-0ab0-4940-8a29-851dfe180b34-kube-api-access-xxdkv\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232514 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6fe8d69b-d257-4f34-b535-177002797675-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232521 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-profile-collector-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232576 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9332ca24-50c6-4625-8e97-a6fd5dd849f3-config\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" 
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232623 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232648 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a78d5238-801b-4521-91d2-6b9bed68d61e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232676 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232702 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232708 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsnk\" (UniqueName: 
\"kubernetes.io/projected/e0875e0e-3239-42ce-b8d5-aecba1e04f68-kube-api-access-vqsnk\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.232957 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-trusted-ca\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233212 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-images\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.233237 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.233266 5107 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.233283 5107 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.233359 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:34.233337687 +0000 UTC m=+120.601995273 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233630 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-trusted-ca-bundle\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233667 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics\") pod 
\"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233700 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb27m\" (UniqueName: \"kubernetes.io/projected/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-kube-api-access-wb27m\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233726 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvb9\" (UniqueName: \"kubernetes.io/projected/cfdfef0c-4111-4a89-aa5a-bf317fc4a772-kube-api-access-8zvb9\") pod \"downloads-747b44746d-p7fkg\" (UID: \"cfdfef0c-4111-4a89-aa5a-bf317fc4a772\") " pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233749 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d9c615f-40e1-433c-9607-fbd841b62901-tmpfs\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233777 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-config\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233801 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-config\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233829 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2aaa38bf-6f3d-4f28-8634-564d553f87a6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233850 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f052856a-22ab-4525-92dc-3baef7ed956a-metrics-tls\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233875 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55527a1c-71b2-4254-82ac-da17df407862-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233900 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-oauth-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 
00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233927 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x2r\" (UniqueName: \"kubernetes.io/projected/f052856a-22ab-4525-92dc-3baef7ed956a-kube-api-access-27x2r\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.233952 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55527a1c-71b2-4254-82ac-da17df407862-config\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.234000 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.234025 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a78d5238-801b-4521-91d2-6b9bed68d61e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.234058 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-webhook-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.234086 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.234259 5107 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.234306 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:34.234294404 +0000 UTC m=+120.602952050 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.235019 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-config\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.235057 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2aaa38bf-6f3d-4f28-8634-564d553f87a6-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.235683 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-cabundle\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.237078 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6623692b-4959-4045-8da6-f64819b323e9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: 
\"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.237088 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fe8d69b-d257-4f34-b535-177002797675-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.237866 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238061 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aaa38bf-6f3d-4f28-8634-564d553f87a6-config\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238442 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3931e83-9df1-49f4-8f33-5ca09792a062-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3931e83-9df1-49f4-8f33-5ca09792a062-serving-cert\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238555 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0875e0e-3239-42ce-b8d5-aecba1e04f68-config\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.238572 5107 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238814 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d08a6b-ab51-44fb-a216-4a9beb9e9141-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.238822 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.238834 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-20 00:10:34.238767928 +0000 UTC m=+120.607425494 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.241334 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aaa38bf-6f3d-4f28-8634-564d553f87a6-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.242368 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-serving-cert\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.242629 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0875e0e-3239-42ce-b8d5-aecba1e04f68-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.242650 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b78a9450-1db5-496b-a8e7-2c12d8e5525f-webhook-certs\") pod 
\"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.243482 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-config\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.246230 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-signing-key\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.255235 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.275814 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.296357 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.315553 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335274 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.335429 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:34.335402414 +0000 UTC m=+120.704059980 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335651 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-trusted-ca-bundle\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335680 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335708 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1d9c615f-40e1-433c-9607-fbd841b62901-tmpfs\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335748 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f052856a-22ab-4525-92dc-3baef7ed956a-metrics-tls\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335770 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55527a1c-71b2-4254-82ac-da17df407862-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335793 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-oauth-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.335818 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27x2r\" (UniqueName: \"kubernetes.io/projected/f052856a-22ab-4525-92dc-3baef7ed956a-kube-api-access-27x2r\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 
00:10:18.335838 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55527a1c-71b2-4254-82ac-da17df407862-config\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336339 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a78d5238-801b-4521-91d2-6b9bed68d61e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336384 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-webhook-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336419 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kj8\" (UniqueName: \"kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336448 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9332ca24-50c6-4625-8e97-a6fd5dd849f3-serving-cert\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: 
\"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336478 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336534 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933616c4-9cd3-4c88-8863-111d8e2ec32b-config\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336556 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-service-ca\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336593 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pkd\" (UniqueName: \"kubernetes.io/projected/bc3238c4-513a-495d-835d-da98864cdb8d-kube-api-access-t9pkd\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336610 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/79076589-1a80-4683-a090-5aa445b6eba8-tmpfs\") pod 
\"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336604 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/55527a1c-71b2-4254-82ac-da17df407862-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336635 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20c92049-0ab0-4940-8a29-851dfe180b34-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336657 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n72jk\" (UniqueName: \"kubernetes.io/projected/1d9c615f-40e1-433c-9607-fbd841b62901-kube-api-access-n72jk\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336682 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336699 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336717 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-srv-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336757 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67jwx\" (UniqueName: \"kubernetes.io/projected/a78d5238-801b-4521-91d2-6b9bed68d61e-kube-api-access-67jwx\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336775 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-console-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f052856a-22ab-4525-92dc-3baef7ed956a-tmp-dir\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc 
kubenswrapper[5107]: I0220 00:10:18.336807 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336827 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336844 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxx4\" (UniqueName: \"kubernetes.io/projected/933616c4-9cd3-4c88-8863-111d8e2ec32b-kube-api-access-vdxx4\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336866 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frcsk\" (UniqueName: \"kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336884 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " 
pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336899 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-apiservice-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336919 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55527a1c-71b2-4254-82ac-da17df407862-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336961 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55527a1c-71b2-4254-82ac-da17df407862-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.336978 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-oauth-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337006 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fswbl\" (UniqueName: 
\"kubernetes.io/projected/79076589-1a80-4683-a090-5aa445b6eba8-kube-api-access-fswbl\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337025 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933616c4-9cd3-4c88-8863-111d8e2ec32b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337048 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvh8q\" (UniqueName: \"kubernetes.io/projected/9332ca24-50c6-4625-8e97-a6fd5dd849f3-kube-api-access-wvh8q\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337064 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8rx\" (UniqueName: \"kubernetes.io/projected/ec63a06b-520e-47da-b761-74cc3462ebd7-kube-api-access-xp8rx\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337099 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-oauth-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337102 
5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xxdkv\" (UniqueName: \"kubernetes.io/projected/20c92049-0ab0-4940-8a29-851dfe180b34-kube-api-access-xxdkv\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-profile-collector-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337209 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9332ca24-50c6-4625-8e97-a6fd5dd849f3-config\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337239 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a78d5238-801b-4521-91d2-6b9bed68d61e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337261 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.337777 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a78d5238-801b-4521-91d2-6b9bed68d61e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.338027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f052856a-22ab-4525-92dc-3baef7ed956a-tmp-dir\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.338081 5107 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: E0220 00:10:18.338121 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs podName:cee716c2-1a9a-4944-9b9f-06284973b167 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:34.338108269 +0000 UTC m=+120.706765835 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs") pod "network-metrics-daemon-j2l2p" (UID: "cee716c2-1a9a-4944-9b9f-06284973b167") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.338825 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-console-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.339153 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/79076589-1a80-4683-a090-5aa445b6eba8-tmpfs\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.339567 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.339919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-service-ca\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.340083 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/1d9c615f-40e1-433c-9607-fbd841b62901-tmpfs\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.341380 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc3238c4-513a-495d-835d-da98864cdb8d-trusted-ca-bundle\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.342345 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-serving-cert\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.344017 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-webhook-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.344472 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79076589-1a80-4683-a090-5aa445b6eba8-apiservice-cert\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.346625 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc3238c4-513a-495d-835d-da98864cdb8d-console-oauth-config\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.355753 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.361876 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-profile-collector-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.362865 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.375685 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.381865 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d9c615f-40e1-433c-9607-fbd841b62901-srv-cert\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.395286 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.400322 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933616c4-9cd3-4c88-8863-111d8e2ec32b-config\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.415575 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.425475 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933616c4-9cd3-4c88-8863-111d8e2ec32b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.435725 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.439105 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.439301 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8rx\" (UniqueName: \"kubernetes.io/projected/ec63a06b-520e-47da-b761-74cc3462ebd7-kube-api-access-xp8rx\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.455210 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.474817 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.490504 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.490504 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.490504 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.495776 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.516444 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.535739 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.542091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f052856a-22ab-4525-92dc-3baef7ed956a-metrics-tls\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.556275 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.576488 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.585470 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20c92049-0ab0-4940-8a29-851dfe180b34-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.615888 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4tj6\" (UniqueName: \"kubernetes.io/projected/01658e78-84f0-4da9-8175-eea829ce2c41-kube-api-access-x4tj6\") pod \"apiserver-8596bd845d-8zn62\" (UID: \"01658e78-84f0-4da9-8175-eea829ce2c41\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.624854 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.637007 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.659858 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4t26\" (UniqueName: \"kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26\") pod \"image-pruner-29525760-tv4jx\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " pod="openshift-image-registry/image-pruner-29525760-tv4jx"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.680617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87t9z\" (UniqueName: \"kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z\") pod \"controller-manager-65b6cccf98-szvd6\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.695737 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5mb\" (UniqueName: \"kubernetes.io/projected/9060b267-3596-4e76-a820-051838b5a5d9-kube-api-access-kt5mb\") pod \"machine-approver-54c688565-k2bhq\" (UID: \"9060b267-3596-4e76-a820-051838b5a5d9\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.719086 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzgt\" (UniqueName: \"kubernetes.io/projected/155d0fc1-3684-40be-aef9-fc97d74cc33c-kube-api-access-ctzgt\") pod \"cluster-image-registry-operator-86c45576b9-5tc5g\" (UID: \"155d0fc1-3684-40be-aef9-fc97d74cc33c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.736662 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.739444 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpxj\" (UniqueName: \"kubernetes.io/projected/38dcb891-7354-413d-ba1d-016f0522c1bb-kube-api-access-4dpxj\") pod \"machine-api-operator-755bb95488-nksz9\" (UID: \"38dcb891-7354-413d-ba1d-016f0522c1bb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.757605 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.772527 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.786606 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.789942 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.796028 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.821405 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.823026 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"]
Feb 20 00:10:18 crc kubenswrapper[5107]: W0220 00:10:18.832168 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01658e78_84f0_4da9_8175_eea829ce2c41.slice/crio-cb2eb069b80b112dfe49b40b99588f0860ce9fec63342285e9689026fe2c8ff3 WatchSource:0}: Error finding container cb2eb069b80b112dfe49b40b99588f0860ce9fec63342285e9689026fe2c8ff3: Status 404 returned error can't find the container with id cb2eb069b80b112dfe49b40b99588f0860ce9fec63342285e9689026fe2c8ff3
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.832309 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxpxr\" (UniqueName: \"kubernetes.io/projected/4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c-kube-api-access-bxpxr\") pod \"apiserver-9ddfb9f55-4cqtl\" (UID: \"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c\") " pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.835099 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.836104 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.858079 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.870954 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl7zz\" (UniqueName: \"kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz\") pod \"route-controller-manager-776cdc94d6-stc6t\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.877818 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.879760 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.882897 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-tv4jx"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.895692 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.914256 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.930669 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s95rn\" (UniqueName: \"kubernetes.io/projected/3ad513ca-9766-436c-9df6-b59c408fedc4-kube-api-access-s95rn\") pod \"cluster-samples-operator-6b564684c8-rd2fd\" (UID: \"3ad513ca-9766-436c-9df6-b59c408fedc4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.955953 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzsdh\" (UniqueName: \"kubernetes.io/projected/3fabeb78-54c7-4333-af95-7722e3cfffb9-kube-api-access-pzsdh\") pod \"openshift-config-operator-5777786469-kgrwk\" (UID: \"3fabeb78-54c7-4333-af95-7722e3cfffb9\") " pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.956546 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.961787 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"
Feb 20 00:10:18 crc kubenswrapper[5107]: I0220 00:10:18.976658 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.001078 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.008432 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55527a1c-71b2-4254-82ac-da17df407862-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.020697 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.027427 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55527a1c-71b2-4254-82ac-da17df407862-config\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.037114 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.037961 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.044838 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.054711 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-nksz9"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.055788 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.075653 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.087020 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" event={"ID":"9060b267-3596-4e76-a820-051838b5a5d9","Type":"ContainerStarted","Data":"f7f21bf150aca9e0efe299ae3ab4810bf6a1769ff525f1d3e741046f8334d1ce"}
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.087942 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" event={"ID":"01658e78-84f0-4da9-8175-eea829ce2c41","Type":"ContainerStarted","Data":"cb2eb069b80b112dfe49b40b99588f0860ce9fec63342285e9689026fe2c8ff3"}
Feb 20 00:10:19 crc kubenswrapper[5107]: W0220 00:10:19.093384 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38dcb891_7354_413d_ba1d_016f0522c1bb.slice/crio-90bdb544f5784d3e0630eb424c80dbdb37d267f0c490bc7243d38aa9ff0edc21 WatchSource:0}: Error finding container 90bdb544f5784d3e0630eb424c80dbdb37d267f0c490bc7243d38aa9ff0edc21: Status 404 returned error can't find the container with id 90bdb544f5784d3e0630eb424c80dbdb37d267f0c490bc7243d38aa9ff0edc21
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.095849 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.103359 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9332ca24-50c6-4625-8e97-a6fd5dd849f3-serving-cert\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.115645 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.118899 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9332ca24-50c6-4625-8e97-a6fd5dd849f3-config\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.130790 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29525760-tv4jx"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.135613 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.156092 5107 request.go:752] "Waited before sending request" delay="1.001107652s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.158886 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.171681 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a78d5238-801b-4521-91d2-6b9bed68d61e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.175836 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.186019 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.196313 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.202766 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.215335 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.217096 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.235323 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 20 00:10:19 crc kubenswrapper[5107]: W0220 00:10:19.245752 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod155d0fc1_3684_40be_aef9_fc97d74cc33c.slice/crio-1d6f7cf81fabee7efb555e707fc5ac58ebad0aa56e7fccdc60e3f3d038ccd4c3 WatchSource:0}: Error finding container 1d6f7cf81fabee7efb555e707fc5ac58ebad0aa56e7fccdc60e3f3d038ccd4c3: Status 404 returned error can't find the container with id 1d6f7cf81fabee7efb555e707fc5ac58ebad0aa56e7fccdc60e3f3d038ccd4c3
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.250183 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.256575 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.257579 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-4cqtl"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.275334 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.297921 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.316122 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.349041 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.356932 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.376545 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.396991 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.415398 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.425530 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.437727 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 20 00:10:19 crc kubenswrapper[5107]: E0220 00:10:19.439541 5107 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 00:10:19 crc kubenswrapper[5107]: E0220 00:10:19.439617 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert podName:ec63a06b-520e-47da-b761-74cc3462ebd7 nodeName:}" failed. No retries permitted until 2026-02-20 00:10:19.93959081 +0000 UTC m=+106.308248376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert") pod "ingress-canary-kcbw2" (UID: "ec63a06b-520e-47da-b761-74cc3462ebd7") : failed to sync secret cache: timed out waiting for the condition
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.455148 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.468136 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-kgrwk"]
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.475298 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.486725 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.494967 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.516295 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.535417 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.556477 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.576206 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.596972 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.615936 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.636023 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.656667 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.676392 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.706199 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.714960 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.742060 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.755721 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.776340 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.796040 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.816858 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.836283 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.858032 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.876062 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.895178 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.915466 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.935896 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.956064 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.967328 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2"
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.975613 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 20 00:10:19 crc kubenswrapper[5107]: I0220 00:10:19.995774 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.015860 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.036351 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.056405 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.076777 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.093841 5107 generic.go:358] "Generic (PLEG): container finished" podID="3fabeb78-54c7-4333-af95-7722e3cfffb9" containerID="26bc23fed3fe46887aeeda1d9f25837387d4a157cd79654357423573d7e45996" exitCode=0
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.093935 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" event={"ID":"3fabeb78-54c7-4333-af95-7722e3cfffb9","Type":"ContainerDied","Data":"26bc23fed3fe46887aeeda1d9f25837387d4a157cd79654357423573d7e45996"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.093982 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" event={"ID":"3fabeb78-54c7-4333-af95-7722e3cfffb9","Type":"ContainerStarted","Data":"a817e20fd00ba8b5eeae354f24f1d469c9703e0ac2268d4bc246fb7181b53efa"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.096715 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.097691 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" event={"ID":"155d0fc1-3684-40be-aef9-fc97d74cc33c","Type":"ContainerStarted","Data":"827ba8c459191846ae8ce257661e1cbc8a0129e0e2502fe3e0dee9ae89603e1b"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.097741 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" event={"ID":"155d0fc1-3684-40be-aef9-fc97d74cc33c","Type":"ContainerStarted","Data":"1d6f7cf81fabee7efb555e707fc5ac58ebad0aa56e7fccdc60e3f3d038ccd4c3"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.099893 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" event={"ID":"38c88f45-4bc8-4153-962b-f3449bbb53ad","Type":"ContainerStarted","Data":"f2645fd0f3e75391e6f4c115d95bb8c0c6b966ef1f116b66c79912c293657c0e"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.099927 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" event={"ID":"38c88f45-4bc8-4153-962b-f3449bbb53ad","Type":"ContainerStarted","Data":"19d23b8edb7506b9de999af2320529e102f0861374b736ca2b045f7732478975"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.100137 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.101878 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-stc6t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.101954 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.102513 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" event={"ID":"c4f6c375-0a3f-4a66-908a-ac8180dba919","Type":"ContainerStarted","Data":"b9e636dc729eb1510c99f1000be11203ed0f0cdaa6e898b0ab477523bde5c7d6"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.102557 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" event={"ID":"c4f6c375-0a3f-4a66-908a-ac8180dba919","Type":"ContainerStarted","Data":"2494ccf44b78b81a06691452e59d0d54c5501b6b3279db79cff065ecb6b628a9"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.102764 
5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.104560 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" event={"ID":"9060b267-3596-4e76-a820-051838b5a5d9","Type":"ContainerStarted","Data":"10845ec12d0278ccf631406893bb9cd1363b8fc985b5bcbf352ae0d7554cdce9"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.104585 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" event={"ID":"9060b267-3596-4e76-a820-051838b5a5d9","Type":"ContainerStarted","Data":"5a8d0ae37b5f91763040e96d90a07c4798cf5ee4a5764517e572e9f483a8971f"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.105625 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-szvd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.105684 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.106133 5107 generic.go:358] "Generic (PLEG): container finished" podID="01658e78-84f0-4da9-8175-eea829ce2c41" containerID="ac9caf1eb945449a5f663950f37c5044ecf3726b664c3245be1186fc8009a839" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.106516 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" event={"ID":"01658e78-84f0-4da9-8175-eea829ce2c41","Type":"ContainerDied","Data":"ac9caf1eb945449a5f663950f37c5044ecf3726b664c3245be1186fc8009a839"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.108287 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" event={"ID":"3ad513ca-9766-436c-9df6-b59c408fedc4","Type":"ContainerStarted","Data":"99030a412e7ddedbb6e3addf5bd2d7056694f91928e80bd679517428766b9e13"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.108314 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" event={"ID":"3ad513ca-9766-436c-9df6-b59c408fedc4","Type":"ContainerStarted","Data":"7f9597d37e4ca08f8e3898fe4abb90a860a92c31dd06c4e75541aa8730ef6626"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.108323 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" event={"ID":"3ad513ca-9766-436c-9df6-b59c408fedc4","Type":"ContainerStarted","Data":"8f00a7ba538b3eae9001b426cb453b01629a8f3ba3d4bc00dd6c206dfd6f8a81"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.110268 5107 generic.go:358] "Generic (PLEG): container finished" podID="4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c" containerID="104d3ed05111ebf2727e489f7be17fc9c240a393e3d9f1aa66cbb4e6ef8f4700" exitCode=0 Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.110370 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" event={"ID":"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c","Type":"ContainerDied","Data":"104d3ed05111ebf2727e489f7be17fc9c240a393e3d9f1aa66cbb4e6ef8f4700"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.110410 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" event={"ID":"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c","Type":"ContainerStarted","Data":"b35a965e4fdba7a261e967e1c96cbb1eae8952e2820d1496ea768da7fcb9a9f0"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.113781 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" event={"ID":"38dcb891-7354-413d-ba1d-016f0522c1bb","Type":"ContainerStarted","Data":"e141e6ec4e75fc5438b5f248a8ee663c5b40d7bc705f4856d5df2c945dbf36ef"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.113811 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" event={"ID":"38dcb891-7354-413d-ba1d-016f0522c1bb","Type":"ContainerStarted","Data":"72413020b1c94b8448da8db9b058f517beabeb28cfb770fbfcf9b162c5f17447"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.113822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" event={"ID":"38dcb891-7354-413d-ba1d-016f0522c1bb","Type":"ContainerStarted","Data":"90bdb544f5784d3e0630eb424c80dbdb37d267f0c490bc7243d38aa9ff0edc21"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.115714 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-tv4jx" event={"ID":"1094e93d-2606-43c0-8b23-334bab811610","Type":"ContainerStarted","Data":"6544fac22d4c5363837b02856094b1f794bd587c6dbb0e8af4e12b4ff2fb4947"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.115764 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-tv4jx" event={"ID":"1094e93d-2606-43c0-8b23-334bab811610","Type":"ContainerStarted","Data":"7270978b4cb1c772b8f829a5c4757213085ec222c7b2fd40f2560389d4de7df1"} Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.115878 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.135291 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.157301 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.174257 5107 request.go:752] "Waited before sending request" delay="1.980579721s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-sysctl-allowlist&limit=500&resourceVersion=0" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.198403 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.198544 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.215621 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.230787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec63a06b-520e-47da-b761-74cc3462ebd7-cert\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.235900 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 20 
00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.256233 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.291281 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72jd\" (UniqueName: \"kubernetes.io/projected/2aaa38bf-6f3d-4f28-8634-564d553f87a6-kube-api-access-p72jd\") pod \"openshift-controller-manager-operator-686468bdd5-f44cp\" (UID: \"2aaa38bf-6f3d-4f28-8634-564d553f87a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.312060 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d08a6b-ab51-44fb-a216-4a9beb9e9141-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-5wdfs\" (UID: \"84d08a6b-ab51-44fb-a216-4a9beb9e9141\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.329773 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2nf\" (UniqueName: \"kubernetes.io/projected/b3931e83-9df1-49f4-8f33-5ca09792a062-kube-api-access-lz2nf\") pod \"authentication-operator-7f5c659b84-c9dcq\" (UID: \"b3931e83-9df1-49f4-8f33-5ca09792a062\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.352617 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmh95\" (UniqueName: \"kubernetes.io/projected/a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee-kube-api-access-zmh95\") pod \"service-ca-74545575db-pv5z4\" (UID: \"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee\") " pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:20 crc 
kubenswrapper[5107]: I0220 00:10:20.372165 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjdjb\" (UniqueName: \"kubernetes.io/projected/dccf73a9-9c9e-4211-8b18-ed7d205bf9d1-kube-api-access-bjdjb\") pod \"console-operator-67c89758df-rq6pw\" (UID: \"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1\") " pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.390565 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkzk\" (UniqueName: \"kubernetes.io/projected/6623692b-4959-4045-8da6-f64819b323e9-kube-api-access-nwkzk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-tcpmn\" (UID: \"6623692b-4959-4045-8da6-f64819b323e9\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.412741 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.461523 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb27m\" (UniqueName: \"kubernetes.io/projected/88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd-kube-api-access-wb27m\") pod \"machine-config-operator-67c9d58cbb-579ws\" (UID: \"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.471401 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvb9\" (UniqueName: \"kubernetes.io/projected/cfdfef0c-4111-4a89-aa5a-bf317fc4a772-kube-api-access-8zvb9\") pod 
\"downloads-747b44746d-p7fkg\" (UID: \"cfdfef0c-4111-4a89-aa5a-bf317fc4a772\") " pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.474172 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.492453 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-894d9\" (UniqueName: \"kubernetes.io/projected/b78a9450-1db5-496b-a8e7-2c12d8e5525f-kube-api-access-894d9\") pod \"multus-admission-controller-69db94689b-zg2vd\" (UID: \"b78a9450-1db5-496b-a8e7-2c12d8e5525f\") " pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.518091 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgt9\" (UniqueName: \"kubernetes.io/projected/6fe8d69b-d257-4f34-b535-177002797675-kube-api-access-pcgt9\") pod \"ingress-operator-6b9cb4dbcf-mr4hv\" (UID: \"6fe8d69b-d257-4f34-b535-177002797675\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.533864 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.552082 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.567072 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.573049 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27x2r\" (UniqueName: \"kubernetes.io/projected/f052856a-22ab-4525-92dc-3baef7ed956a-kube-api-access-27x2r\") pod \"dns-operator-799b87ffcd-kv84j\" (UID: \"f052856a-22ab-4525-92dc-3baef7ed956a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.583421 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.586038 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.595455 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.607579 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.608043 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.610923 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxx4\" (UniqueName: \"kubernetes.io/projected/933616c4-9cd3-4c88-8863-111d8e2ec32b-kube-api-access-vdxx4\") pod \"openshift-apiserver-operator-846cbfc458-b778b\" (UID: \"933616c4-9cd3-4c88-8863-111d8e2ec32b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.615368 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-pv5z4" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.625640 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcsk\" (UniqueName: \"kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk\") pod \"marketplace-operator-547dbd544d-cvxll\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.634768 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxdkv\" (UniqueName: \"kubernetes.io/projected/20c92049-0ab0-4940-8a29-851dfe180b34-kube-api-access-xxdkv\") pod \"package-server-manager-77f986bd66-4sdjl\" (UID: \"20c92049-0ab0-4940-8a29-851dfe180b34\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.638751 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsnk\" (UniqueName: \"kubernetes.io/projected/e0875e0e-3239-42ce-b8d5-aecba1e04f68-kube-api-access-vqsnk\") pod \"kube-storage-version-migrator-operator-565b79b866-26g2b\" (UID: \"e0875e0e-3239-42ce-b8d5-aecba1e04f68\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.639185 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswbl\" (UniqueName: \"kubernetes.io/projected/79076589-1a80-4683-a090-5aa445b6eba8-kube-api-access-fswbl\") pod \"packageserver-7d4fc7d867-dp7hb\" (UID: \"79076589-1a80-4683-a090-5aa445b6eba8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.646559 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.651946 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.655167 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n72jk\" (UniqueName: \"kubernetes.io/projected/1d9c615f-40e1-433c-9607-fbd841b62901-kube-api-access-n72jk\") pod \"olm-operator-5cdf44d969-c42nb\" (UID: \"1d9c615f-40e1-433c-9607-fbd841b62901\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.659622 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.666475 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.695207 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jwx\" (UniqueName: \"kubernetes.io/projected/a78d5238-801b-4521-91d2-6b9bed68d61e-kube-api-access-67jwx\") pod \"machine-config-controller-f9cdd68f7-m4t5z\" (UID: \"a78d5238-801b-4521-91d2-6b9bed68d61e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.695600 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.704040 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kj8\" (UniqueName: \"kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8\") pod \"collect-profiles-29525760-gk87r\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.716558 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pkd\" (UniqueName: \"kubernetes.io/projected/bc3238c4-513a-495d-835d-da98864cdb8d-kube-api-access-t9pkd\") pod \"console-64d44f6ddf-sx776\" (UID: \"bc3238c4-513a-495d-835d-da98864cdb8d\") " pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.742683 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55527a1c-71b2-4254-82ac-da17df407862-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-kdptd\" (UID: \"55527a1c-71b2-4254-82ac-da17df407862\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.751611 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvh8q\" (UniqueName: \"kubernetes.io/projected/9332ca24-50c6-4625-8e97-a6fd5dd849f3-kube-api-access-wvh8q\") pod \"service-ca-operator-5b9c976747-ph8fq\" (UID: \"9332ca24-50c6-4625-8e97-a6fd5dd849f3\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.755043 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-p7fkg"]
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.801400 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.815868 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.816705 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8rx\" (UniqueName: \"kubernetes.io/projected/ec63a06b-520e-47da-b761-74cc3462ebd7-kube-api-access-xp8rx\") pod \"ingress-canary-kcbw2\" (UID: \"ec63a06b-520e-47da-b761-74cc3462ebd7\") " pod="openshift-ingress-canary/ingress-canary-kcbw2"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.838906 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.855937 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.859638 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.879292 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.898349 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.921600 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.927975 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.936826 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.972441 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.978031 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.984232 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.994939 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.994974 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-tmpfs\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995000 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995023 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7008e5-9282-4238-a23b-67c75f7cc997-config\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995171 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995214 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995260 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8wx\" (UniqueName: \"kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995306 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995332 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995413 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995489 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995513 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995535 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-node-bootstrap-token\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995571 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995619 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995642 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.995664 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998791 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998838 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-client\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998858 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7vk\" (UniqueName: \"kubernetes.io/projected/ff5f4955-00cc-43bc-aee5-55712109ce87-kube-api-access-rm7vk\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998906 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1baa873-3e83-4156-a28a-002a10a6147a-metrics-tls\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998945 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-metrics-certs\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.998986 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crjdv\" (UniqueName: \"kubernetes.io/projected/b4cec451-c20b-4fbe-bfc6-cac323ecd942-kube-api-access-crjdv\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.999034 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.999083 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-registration-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:20 crc kubenswrapper[5107]: E0220 00:10:20.999259 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:21.499243354 +0000 UTC m=+107.867900920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.999115 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799w4\" (UniqueName: \"kubernetes.io/projected/c1baa873-3e83-4156-a28a-002a10a6147a-kube-api-access-799w4\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:20 crc kubenswrapper[5107]: I0220 00:10:20.999933 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-csi-data-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000023 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcd6\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000046 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-srv-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000218 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkn9\" (UniqueName: \"kubernetes.io/projected/ae539524-1c1e-4e63-b76e-f5f8403e3734-kube-api-access-bhkn9\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000312 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-default-certificate\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000358 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c7008e5-9282-4238-a23b-67c75f7cc997-serving-cert\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000702 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000748 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ss5p\" (UniqueName: \"kubernetes.io/projected/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-kube-api-access-5ss5p\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.000880 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c7008e5-9282-4238-a23b-67c75f7cc997-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.003506 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.003785 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-socket-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.003814 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c7008e5-9282-4238-a23b-67c75f7cc997-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009008 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009132 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-serving-cert\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009340 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mjg8\" (UniqueName: \"kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009490 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009562 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-mountpoint-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009598 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff5f4955-00cc-43bc-aee5-55712109ce87-tmp-dir\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009647 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009670 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c1baa873-3e83-4156-a28a-002a10a6147a-tmp-dir\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009692 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-plugins-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.009757 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-config\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010223 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010255 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s72zb\" (UniqueName: \"kubernetes.io/projected/b2556f1d-c6f7-47d6-adf8-b2e5fd522346-kube-api-access-s72zb\") pod \"migrator-866fcbc849-wkzt7\" (UID: \"b2556f1d-c6f7-47d6-adf8-b2e5fd522346\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010319 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-certs\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010344 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnw4s\" (UniqueName: \"kubernetes.io/projected/6368211b-5c56-4570-a4e6-b6cf86b392f2-kube-api-access-tnw4s\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010473 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1baa873-3e83-4156-a28a-002a10a6147a-config-volume\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010640 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6368211b-5c56-4570-a4e6-b6cf86b392f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.010752 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.013780 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.013810 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-stats-auth\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.013846 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.084162 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kcbw2"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.116804 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117247 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-serving-cert\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjg8\" (UniqueName: \"kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117293 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117311 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-mountpoint-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117326 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff5f4955-00cc-43bc-aee5-55712109ce87-tmp-dir\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117340 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117357 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c1baa873-3e83-4156-a28a-002a10a6147a-tmp-dir\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117373 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-plugins-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117388 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-config\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117418 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117433 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s72zb\" (UniqueName: \"kubernetes.io/projected/b2556f1d-c6f7-47d6-adf8-b2e5fd522346-kube-api-access-s72zb\") pod \"migrator-866fcbc849-wkzt7\" (UID: \"b2556f1d-c6f7-47d6-adf8-b2e5fd522346\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117449 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-certs\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117464 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnw4s\" (UniqueName: \"kubernetes.io/projected/6368211b-5c56-4570-a4e6-b6cf86b392f2-kube-api-access-tnw4s\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117487 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1baa873-3e83-4156-a28a-002a10a6147a-config-volume\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117507 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6368211b-5c56-4570-a4e6-b6cf86b392f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117555 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117574 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117588 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-stats-auth\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117606 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117625 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117641 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-tmpfs\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117655 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7008e5-9282-4238-a23b-67c75f7cc997-config\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117688 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117707 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117726 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8wx\" (UniqueName: \"kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117745 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117778 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117835 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-node-bootstrap-token\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117854 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117873 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117888 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117906 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117928 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117954 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117971 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-client\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.117985 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rm7vk\" (UniqueName: \"kubernetes.io/projected/ff5f4955-00cc-43bc-aee5-55712109ce87-kube-api-access-rm7vk\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118002 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1baa873-3e83-4156-a28a-002a10a6147a-metrics-tls\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118018 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-metrics-certs\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118033 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-crjdv\" (UniqueName: \"kubernetes.io/projected/b4cec451-c20b-4fbe-bfc6-cac323ecd942-kube-api-access-crjdv\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118050 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118065 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-registration-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118081 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-799w4\" (UniqueName: \"kubernetes.io/projected/c1baa873-3e83-4156-a28a-002a10a6147a-kube-api-access-799w4\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118103 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-csi-data-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118123 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xkcd6\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118153 
5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-srv-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118181 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118196 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkn9\" (UniqueName: \"kubernetes.io/projected/ae539524-1c1e-4e63-b76e-f5f8403e3734-kube-api-access-bhkn9\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118216 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-default-certificate\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c7008e5-9282-4238-a23b-67c75f7cc997-serving-cert\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118312 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118334 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ss5p\" (UniqueName: \"kubernetes.io/projected/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-kube-api-access-5ss5p\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118360 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c7008e5-9282-4238-a23b-67c75f7cc997-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118398 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118433 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-socket-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118454 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c7008e5-9282-4238-a23b-67c75f7cc997-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.118504 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.118664 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:21.618647802 +0000 UTC m=+107.987305368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.119649 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.119654 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.119723 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-mountpoint-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.121279 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff5f4955-00cc-43bc-aee5-55712109ce87-tmp-dir\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: 
\"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.121779 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c1baa873-3e83-4156-a28a-002a10a6147a-tmp-dir\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.121915 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-plugins-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.121926 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-registration-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.122379 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-config\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.125377 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc 
kubenswrapper[5107]: I0220 00:10:21.125978 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-csi-data-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.127777 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-tmpfs\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.127835 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.128786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4cec451-c20b-4fbe-bfc6-cac323ecd942-socket-dir\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.129134 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c7008e5-9282-4238-a23b-67c75f7cc997-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: 
I0220 00:10:21.129590 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.129624 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.131482 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.131752 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.131766 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 
00:10:21.131932 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.132546 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-srv-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.132637 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.133097 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.133438 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-default-certificate\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " 
pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.134465 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.134802 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.135086 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-serving-cert\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.135378 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.135406 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: 
\"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.136039 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6368211b-5c56-4570-a4e6-b6cf86b392f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.136422 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c7008e5-9282-4238-a23b-67c75f7cc997-config\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.136721 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1baa873-3e83-4156-a28a-002a10a6147a-config-volume\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138016 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138212 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error\") 
pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138419 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138530 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-node-bootstrap-token\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138588 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-stats-auth\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138692 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1baa873-3e83-4156-a28a-002a10a6147a-metrics-tls\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.138885 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae539524-1c1e-4e63-b76e-f5f8403e3734-certs\") pod 
\"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.139314 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.139650 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.139665 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.140137 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6368211b-5c56-4570-a4e6-b6cf86b392f2-metrics-certs\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.140498 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c7008e5-9282-4238-a23b-67c75f7cc997-serving-cert\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: \"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.140931 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ff5f4955-00cc-43bc-aee5-55712109ce87-etcd-client\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.143130 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.143703 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.161878 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjg8\" (UniqueName: \"kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8\") pod \"cni-sysctl-allowlist-ds-wrrss\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 
00:10:21.201031 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-799w4\" (UniqueName: \"kubernetes.io/projected/c1baa873-3e83-4156-a28a-002a10a6147a-kube-api-access-799w4\") pod \"dns-default-x8c42\" (UID: \"c1baa873-3e83-4156-a28a-002a10a6147a\") " pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.212382 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-p7fkg" event={"ID":"cfdfef0c-4111-4a89-aa5a-bf317fc4a772","Type":"ContainerStarted","Data":"92a94e13e514c2b6978f82f09deb0614b883935c8605463abe9c1b808879a287"} Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.213821 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-szvd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.213876 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.214436 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-stc6t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.214464 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" 
podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.219641 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.220183 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:21.720166144 +0000 UTC m=+108.088823700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.238298 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkn9\" (UniqueName: \"kubernetes.io/projected/ae539524-1c1e-4e63-b76e-f5f8403e3734-kube-api-access-bhkn9\") pod \"machine-config-server-85dkw\" (UID: \"ae539524-1c1e-4e63-b76e-f5f8403e3734\") " pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.259027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkcd6\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.269321 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.271587 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-zg2vd"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.277756 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c7008e5-9282-4238-a23b-67c75f7cc997-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gcwx4\" (UID: 
\"5c7008e5-9282-4238-a23b-67c75f7cc997\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.284738 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8wx\" (UniqueName: \"kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx\") pod \"oauth-openshift-66458b6674-lp56s\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.317835 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.318388 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-crjdv\" (UniqueName: \"kubernetes.io/projected/b4cec451-c20b-4fbe-bfc6-cac323ecd942-kube-api-access-crjdv\") pod \"csi-hostpathplugin-24bsp\" (UID: \"b4cec451-c20b-4fbe-bfc6-cac323ecd942\") " pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.319231 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.320795 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.323468 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.323642 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:21.823597188 +0000 UTC m=+108.192254764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.345090 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.352014 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ss5p\" (UniqueName: \"kubernetes.io/projected/e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf-kube-api-access-5ss5p\") pod \"catalog-operator-75ff9f647d-klv4w\" (UID: \"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.352397 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-85dkw" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.364466 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnw4s\" (UniqueName: \"kubernetes.io/projected/6368211b-5c56-4570-a4e6-b6cf86b392f2-kube-api-access-tnw4s\") pod \"router-default-68cf44c8b8-4ltpk\" (UID: \"6368211b-5c56-4570-a4e6-b6cf86b392f2\") " pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.365687 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.376414 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.378598 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm7vk\" (UniqueName: \"kubernetes.io/projected/ff5f4955-00cc-43bc-aee5-55712109ce87-kube-api-access-rm7vk\") pod \"etcd-operator-69b85846b6-lw4ks\" (UID: \"ff5f4955-00cc-43bc-aee5-55712109ce87\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.405089 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s72zb\" (UniqueName: \"kubernetes.io/projected/b2556f1d-c6f7-47d6-adf8-b2e5fd522346-kube-api-access-s72zb\") pod \"migrator-866fcbc849-wkzt7\" (UID: \"b2556f1d-c6f7-47d6-adf8-b2e5fd522346\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.424248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.424639 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rq6pw"] Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.424696 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:21.924677247 +0000 UTC m=+108.293334803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.430218 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws"] Feb 20 00:10:21 crc kubenswrapper[5107]: W0220 00:10:21.495578 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f0f58c_020c_4ce1_9f3d_3aa63ae92ddd.slice/crio-6e438bffb9dbc4cc98e1dc589afbcb99412b71385a030c33abac7024c916f6de WatchSource:0}: Error finding container 6e438bffb9dbc4cc98e1dc589afbcb99412b71385a030c33abac7024c916f6de: Status 404 returned error can't find the container with id 6e438bffb9dbc4cc98e1dc589afbcb99412b71385a030c33abac7024c916f6de Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.525717 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.525765 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.025743656 +0000 UTC m=+108.394401222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.526009 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.526525 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.026513057 +0000 UTC m=+108.395170623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: W0220 00:10:21.565066 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae539524_1c1e_4e63_b76e_f5f8403e3734.slice/crio-b86cd1aac74387634e73d510560a7f4ec140e8b8f18f211c0c062442694d7cb3 WatchSource:0}: Error finding container b86cd1aac74387634e73d510560a7f4ec140e8b8f18f211c0c062442694d7cb3: Status 404 returned error can't find the container with id b86cd1aac74387634e73d510560a7f4ec140e8b8f18f211c0c062442694d7cb3 Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.598592 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.603675 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.628113 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.628960 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.128572104 +0000 UTC m=+108.497229670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.630330 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.640917 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" Feb 20 00:10:21 crc kubenswrapper[5107]: W0220 00:10:21.715793 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6368211b_5c56_4570_a4e6_b6cf86b392f2.slice/crio-c336b80051d764158e8a5baf229de20207e307387d57c1e10bb8e49e114036be WatchSource:0}: Error finding container c336b80051d764158e8a5baf229de20207e307387d57c1e10bb8e49e114036be: Status 404 returned error can't find the container with id c336b80051d764158e8a5baf229de20207e307387d57c1e10bb8e49e114036be Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.731425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.731880 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.231861404 +0000 UTC m=+108.600518970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.833186 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.833435 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.333390256 +0000 UTC m=+108.702047822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.833620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.834247 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.334236839 +0000 UTC m=+108.702894405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.890833 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podStartSLOduration=84.890809002 podStartE2EDuration="1m24.890809002s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:21.877234574 +0000 UTC m=+108.245892140" watchObservedRunningTime="2026-02-20 00:10:21.890809002 +0000 UTC m=+108.259466568" Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.893606 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.936483 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:21 crc kubenswrapper[5107]: E0220 00:10:21.941125 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-20 00:10:22.441105039 +0000 UTC m=+108.809762605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.971475 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.976691 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.981951 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.990789 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs"] Feb 20 00:10:21 crc kubenswrapper[5107]: I0220 00:10:21.996743 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:21.999711 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-kv84j"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.001900 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-sx776"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.010931 5107 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.013264 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.014832 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.016779 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.017640 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.019906 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-pv5z4"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.020763 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.023513 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.042369 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.042743 
5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.542732164 +0000 UTC m=+108.911389730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.115037 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5tc5g" podStartSLOduration=85.115025413 podStartE2EDuration="1m25.115025413s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.114358584 +0000 UTC m=+108.483016150" watchObservedRunningTime="2026-02-20 00:10:22.115025413 +0000 UTC m=+108.483682979" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.115709 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rd2fd" podStartSLOduration=86.115703702 podStartE2EDuration="1m26.115703702s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.068000036 +0000 UTC m=+108.436657602" watchObservedRunningTime="2026-02-20 00:10:22.115703702 +0000 UTC m=+108.484361258" Feb 20 00:10:22 crc 
kubenswrapper[5107]: I0220 00:10:22.145931 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.147731 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.647705591 +0000 UTC m=+109.016363157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.151567 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lp56s"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.178824 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.205049 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x8c42"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.215401 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kcbw2"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.221507 
5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-24bsp"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.248371 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.248960 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.748947295 +0000 UTC m=+109.117604861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.256595 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-nksz9" podStartSLOduration=85.256578327 podStartE2EDuration="1m25.256578327s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.234835352 +0000 UTC m=+108.603492918" watchObservedRunningTime="2026-02-20 00:10:22.256578327 +0000 UTC m=+108.625235893" Feb 20 00:10:22 crc 
kubenswrapper[5107]: I0220 00:10:22.259558 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" event={"ID":"2aaa38bf-6f3d-4f28-8634-564d553f87a6","Type":"ContainerStarted","Data":"7630e890fbe7d3187ebcb4decb1c8ae257a9d456ab9ca219c603cafbab2c6963"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.259609 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" event={"ID":"2aaa38bf-6f3d-4f28-8634-564d553f87a6","Type":"ContainerStarted","Data":"20d43dc9480dd3fd381b978a6a3954da88f590ba6083b104930196e56055db17"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.261352 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" event={"ID":"e0875e0e-3239-42ce-b8d5-aecba1e04f68","Type":"ContainerStarted","Data":"115078960e0cb40dfb272a3b5b0dda0293002207ca2832905eb3213bd0c56969"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.265118 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" event={"ID":"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1","Type":"ContainerStarted","Data":"f9f11ceddf1c268307e499d463a1b4f2a835ccc208b15501e0a8f10bfd7269a7"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.273374 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podStartSLOduration=85.273351783 podStartE2EDuration="1m25.273351783s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.269462245 +0000 UTC m=+108.638119801" watchObservedRunningTime="2026-02-20 
00:10:22.273351783 +0000 UTC m=+108.642009349" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.282344 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" event={"ID":"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c","Type":"ContainerStarted","Data":"ba88ad1b9b2308e922b090f67533edb65de006afab3e133ada2ae5a534b04993"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.291192 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.306184 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" event={"ID":"6368211b-5c56-4570-a4e6-b6cf86b392f2","Type":"ContainerStarted","Data":"c336b80051d764158e8a5baf229de20207e307387d57c1e10bb8e49e114036be"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.308891 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" event={"ID":"79076589-1a80-4683-a090-5aa445b6eba8","Type":"ContainerStarted","Data":"79b676fb250f1a127a7385ea1c86be8d9d15f9580e5ce7737f416fcd333ed90c"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.322339 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" event={"ID":"f052856a-22ab-4525-92dc-3baef7ed956a","Type":"ContainerStarted","Data":"80e232ae7b703139b0369ff76b8177d1129bd92c1da61f4d30aa249c36a5a3de"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.334068 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" event={"ID":"3fabeb78-54c7-4333-af95-7722e3cfffb9","Type":"ContainerStarted","Data":"fdfcbb9d382edc4a32e52420ad8a9e3b4c9f617035ba9a36a8eba47af5ee5efb"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.337003 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" event={"ID":"a78d5238-801b-4521-91d2-6b9bed68d61e","Type":"ContainerStarted","Data":"b00e45937801063d7fa0150fce4b29f25e2420f912952dbe462f3713d202d783"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.339195 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" event={"ID":"20c92049-0ab0-4940-8a29-851dfe180b34","Type":"ContainerStarted","Data":"5d312bec03cd3765825d1ff98c2a5ea129095b98c6958d98ec706c8208053944"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.346520 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" event={"ID":"b78a9450-1db5-496b-a8e7-2c12d8e5525f","Type":"ContainerStarted","Data":"54907ba7e6f93ca1cceed990201725651d1e987abd3edb41e996b77a0e8cb9c4"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.348654 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.355418 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" event={"ID":"01658e78-84f0-4da9-8175-eea829ce2c41","Type":"ContainerStarted","Data":"8e8f318ab06082d29dda51681b5ee4f17332bc2d47ac1a0830257797af5365ad"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.358308 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" event={"ID":"01d70318-38f6-4dc0-acc4-36458ccf419c","Type":"ContainerStarted","Data":"73281017a853c55f5598955d52483b356bdde4e56f3b1b202da5bc72b447b014"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.358916 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.361042 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-85dkw" event={"ID":"ae539524-1c1e-4e63-b76e-f5f8403e3734","Type":"ContainerStarted","Data":"b86cd1aac74387634e73d510560a7f4ec140e8b8f18f211c0c062442694d7cb3"} Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.362776 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.862737207 +0000 UTC m=+109.231394773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.370195 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" event={"ID":"da6b7a25-5740-4b62-ab8d-dd83057a3d7a","Type":"ContainerStarted","Data":"c0f7ea4eab1b8be04af0c94e3c6151e151c1267d6cc8ccc877b6f9bf937c351a"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.385409 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" 
event={"ID":"1d9c615f-40e1-433c-9607-fbd841b62901","Type":"ContainerStarted","Data":"524268db176261d4ea5f5925bb4169cf2f570fbd3a764cc9f6b099b6128c14ed"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.398866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-p7fkg" event={"ID":"cfdfef0c-4111-4a89-aa5a-bf317fc4a772","Type":"ContainerStarted","Data":"9dec217f5c2574df514b53c4d3bef1fc0a9105b509761cd6af3018cda4402ac5"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.402891 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29525760-tv4jx" podStartSLOduration=86.402848802 podStartE2EDuration="1m26.402848802s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.392544415 +0000 UTC m=+108.761201991" watchObservedRunningTime="2026-02-20 00:10:22.402848802 +0000 UTC m=+108.771506388" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.403454 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.404398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerStarted","Data":"6b67bc5e444b39a355618aefdb1560c73a260c1c7fddf2f229dec6d12d4c39c2"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.411162 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks"] Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.421175 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" 
event={"ID":"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd","Type":"ContainerStarted","Data":"6e438bffb9dbc4cc98e1dc589afbcb99412b71385a030c33abac7024c916f6de"} Feb 20 00:10:22 crc kubenswrapper[5107]: W0220 00:10:22.431935 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5f4955_00cc_43bc_aee5_55712109ce87.slice/crio-3575bbb98f748a802fb4b10f9279e3d233abfc5e475832d2e378bf7690b5918a WatchSource:0}: Error finding container 3575bbb98f748a802fb4b10f9279e3d233abfc5e475832d2e378bf7690b5918a: Status 404 returned error can't find the container with id 3575bbb98f748a802fb4b10f9279e3d233abfc5e475832d2e378bf7690b5918a Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.440903 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" event={"ID":"84d08a6b-ab51-44fb-a216-4a9beb9e9141","Type":"ContainerStarted","Data":"f55d059698531df20be2b9266efcd95b1085d19dba3b1b217ba1a25d2327db9a"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.441299 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.442593 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" event={"ID":"b3931e83-9df1-49f4-8f33-5ca09792a062","Type":"ContainerStarted","Data":"62bfdbdb1aadcdd164b7f3a25a2aa840dc9bdd8d42483653863d949938191ceb"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.447398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-sx776" event={"ID":"bc3238c4-513a-495d-835d-da98864cdb8d","Type":"ContainerStarted","Data":"747f5d5f5e8d378bf75d51bfba4cda88e9aee0bd64cb00783e93697d26e6a79b"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 
00:10:22.448320 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" event={"ID":"6fe8d69b-d257-4f34-b535-177002797675","Type":"ContainerStarted","Data":"62de14828a84aefd6800c54b2103554297760cc3962ad17995eb18a5ea1f3be3"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.449622 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" event={"ID":"933616c4-9cd3-4c88-8863-111d8e2ec32b","Type":"ContainerStarted","Data":"b9401a8bc9fba38a0c6e9a97c02cd7291de7180ac7e9695f6147f763091acafb"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.451000 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-pv5z4" event={"ID":"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee","Type":"ContainerStarted","Data":"a6d2fa988f8b20dcceaa04df61edc893e5095a186b92e4fc6f718ee431598869"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.452686 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" event={"ID":"6623692b-4959-4045-8da6-f64819b323e9","Type":"ContainerStarted","Data":"8824239577dd7a54d08be3779fa1f491aa4744a8b37d4727147c430f92471cf4"} Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.460890 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.462408 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-20 00:10:22.962393657 +0000 UTC m=+109.331051223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.530577 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-p7fkg" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.532774 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-p7fkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.532983 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-p7fkg" podUID="cfdfef0c-4111-4a89-aa5a-bf317fc4a772" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.534936 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53128: no serving certificate available for the kubelet" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.561897 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.562084 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.062055856 +0000 UTC m=+109.430713442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.562422 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.562798 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.062786717 +0000 UTC m=+109.431444283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.630867 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53132: no serving certificate available for the kubelet" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.667639 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.667742 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.167726033 +0000 UTC m=+109.536383599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.668809 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.669296 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.169287916 +0000 UTC m=+109.537945482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.730020 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53144: no serving certificate available for the kubelet" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.769766 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.770227 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.270208991 +0000 UTC m=+109.638866557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.834978 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53156: no serving certificate available for the kubelet" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.866254 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-k2bhq" podStartSLOduration=86.866236069 podStartE2EDuration="1m26.866236069s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:22.86449158 +0000 UTC m=+109.233149146" watchObservedRunningTime="2026-02-20 00:10:22.866236069 +0000 UTC m=+109.234893635" Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.871905 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.872511 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-20 00:10:23.372495723 +0000 UTC m=+109.741153299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.934570 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53168: no serving certificate available for the kubelet"
Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.973192 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.973391 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.473359256 +0000 UTC m=+109.842016812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:22 crc kubenswrapper[5107]: I0220 00:10:22.973716 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:22 crc kubenswrapper[5107]: E0220 00:10:22.974033 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.474025354 +0000 UTC m=+109.842682920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.031462 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53176: no serving certificate available for the kubelet"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.075185 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.075634 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.575609988 +0000 UTC m=+109.944267564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.131168 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53182: no serving certificate available for the kubelet"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.177449 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.178003 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.677968822 +0000 UTC m=+110.046626388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.232131 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" podStartSLOduration=86.232108667 podStartE2EDuration="1m26.232108667s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.199292575 +0000 UTC m=+109.567950181" watchObservedRunningTime="2026-02-20 00:10:23.232108667 +0000 UTC m=+109.600766233"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.232478 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" podStartSLOduration=87.232471677 podStartE2EDuration="1m27.232471677s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.230124942 +0000 UTC m=+109.598782508" watchObservedRunningTime="2026-02-20 00:10:23.232471677 +0000 UTC m=+109.601129243"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.235863 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53196: no serving certificate available for the kubelet"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.271466 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-p7fkg" podStartSLOduration=87.27143111 podStartE2EDuration="1m27.27143111s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.267575092 +0000 UTC m=+109.636232658" watchObservedRunningTime="2026-02-20 00:10:23.27143111 +0000 UTC m=+109.640088696"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.278306 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.279086 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.779062802 +0000 UTC m=+110.147720368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.382895 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.383290 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.883270988 +0000 UTC m=+110.251928554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.462803 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" event={"ID":"b7763f2e-cc78-4dd1-a5d8-599e880ed627","Type":"ContainerStarted","Data":"51e6f06eeeaaa595acc8bbf33fe8264a5899a7962776f1c025de7c3ae71f6575"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.464170 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" event={"ID":"e0875e0e-3239-42ce-b8d5-aecba1e04f68","Type":"ContainerStarted","Data":"54faceabcab72d5350e6cd1a57634bf9a01dfb18beb46436df5172d0b4d81767"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.465099 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" event={"ID":"dccf73a9-9c9e-4211-8b18-ed7d205bf9d1","Type":"ContainerStarted","Data":"5ee8900a66c07906945e29dfcec61b70cad3d76f85d64ac286ad5345db541641"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.466782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" event={"ID":"4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c","Type":"ContainerStarted","Data":"f758e4a581aef747b13b507614cbca5be80cd9cde3eaaad465f1f6a306ddb3ec"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.467682 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" event={"ID":"b4cec451-c20b-4fbe-bfc6-cac323ecd942","Type":"ContainerStarted","Data":"fbee474bc840f1853b5d7701d196a5163063bd38c09266733f3894589bd9be01"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.469273 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" event={"ID":"79076589-1a80-4683-a090-5aa445b6eba8","Type":"ContainerStarted","Data":"6a732ce5cdbb0ce4df8df229dc46e1e248597415bbbbeadb007f65c90e23e22b"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.470288 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" event={"ID":"a78d5238-801b-4521-91d2-6b9bed68d61e","Type":"ContainerStarted","Data":"cb25c6ee42f7e153563443c1112946504c95f900235f3529e624dbb1b110a79f"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.471620 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" event={"ID":"b78a9450-1db5-496b-a8e7-2c12d8e5525f","Type":"ContainerStarted","Data":"fc3d91b4ad2c51c37ad2c85058e10aca3fc318041b3a758f1a6f8d8276daff91"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.473508 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" event={"ID":"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf","Type":"ContainerStarted","Data":"6b62b277f012d9aa20ae07808be53afd540f56c11d764223715d6fb713e22ae7"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.477325 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" event={"ID":"5c7008e5-9282-4238-a23b-67c75f7cc997","Type":"ContainerStarted","Data":"c5bd6ea1f13803bb44d4462e3af41d8832d131ae31a5c31ed9fe616b2558e3f1"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.478496 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kcbw2" event={"ID":"ec63a06b-520e-47da-b761-74cc3462ebd7","Type":"ContainerStarted","Data":"825ef5df71916a5f4ca8b78b452144b8cf818e39607afb951ec50e6c58c6e68d"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.479579 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" event={"ID":"55527a1c-71b2-4254-82ac-da17df407862","Type":"ContainerStarted","Data":"95073b4654cc5a8a0b87c91788a5ce2fcc4dd06bdb3f3670a2f739cc0b5f42e8"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.481317 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-85dkw" event={"ID":"ae539524-1c1e-4e63-b76e-f5f8403e3734","Type":"ContainerStarted","Data":"889dc3b17ab574188038566df3095d10987117345e3dd9cab42dc609615387c7"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.482877 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" event={"ID":"84d08a6b-ab51-44fb-a216-4a9beb9e9141","Type":"ContainerStarted","Data":"7e545b9f40b43a700d77982ae0e3e705909ee4ba5861cbaef384e36459e40a3a"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.484008 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.484180 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.984136361 +0000 UTC m=+110.352793917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.484474 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" event={"ID":"9332ca24-50c6-4625-8e97-a6fd5dd849f3","Type":"ContainerStarted","Data":"f22fcb5298ffb4681ee1a61202eb57987f774511c5644fb0d10eeb5f672c962b"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.484505 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.484878 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:23.984867451 +0000 UTC m=+110.353525007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.487839 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerStarted","Data":"3e90a833ccabef84d6f8a414df95fab479752a61eaa0f099c7b823171c0aa9d8"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.498670 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" event={"ID":"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd","Type":"ContainerStarted","Data":"6d1be9cfc0722a9923b0375dc6be160d4997ad4cf23b6912ca4eaeb28fd9dde8"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.500698 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" event={"ID":"933616c4-9cd3-4c88-8863-111d8e2ec32b","Type":"ContainerStarted","Data":"70e75dcc5c60c818af05cf9d1e672515e9f5c4a4d8444e3acaa90cf8afc07675"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.509622 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" event={"ID":"b2556f1d-c6f7-47d6-adf8-b2e5fd522346","Type":"ContainerStarted","Data":"0ee613587ce0fb4f2205592d88396672d7d2b14b26d117aeda4081236d381bf0"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.509650 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" event={"ID":"b2556f1d-c6f7-47d6-adf8-b2e5fd522346","Type":"ContainerStarted","Data":"4efb9e571f64d693b1d6bb475e2ae5fbf1f2474b94e3ae407df7cc06167db71f"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.512316 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8c42" event={"ID":"c1baa873-3e83-4156-a28a-002a10a6147a","Type":"ContainerStarted","Data":"6997e74a42dc897ca5b148cbb817d5c969ed66363de26a93844a3c821d32d536"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.518576 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-f44cp" podStartSLOduration=87.518556427 podStartE2EDuration="1m27.518556427s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.312855901 +0000 UTC m=+109.681513517" watchObservedRunningTime="2026-02-20 00:10:23.518556427 +0000 UTC m=+109.887213993"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.520203 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-85dkw" podStartSLOduration=6.520194163 podStartE2EDuration="6.520194163s" podCreationTimestamp="2026-02-20 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.518180677 +0000 UTC m=+109.886838243" watchObservedRunningTime="2026-02-20 00:10:23.520194163 +0000 UTC m=+109.888851729"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.526153 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" event={"ID":"6368211b-5c56-4570-a4e6-b6cf86b392f2","Type":"ContainerStarted","Data":"17c79f216e7262184af36a6eda0de713fbcd5358374b3aa417b5c738fb5b622d"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.535170 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" event={"ID":"20c92049-0ab0-4940-8a29-851dfe180b34","Type":"ContainerStarted","Data":"3b66d12af6b48ca96542b2a19b6171b30a650377e37a28f55f2e0c8eb33e6a32"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.538941 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" podStartSLOduration=86.538923433 podStartE2EDuration="1m26.538923433s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.538851971 +0000 UTC m=+109.907509538" watchObservedRunningTime="2026-02-20 00:10:23.538923433 +0000 UTC m=+109.907580999"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.540536 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" event={"ID":"01d70318-38f6-4dc0-acc4-36458ccf419c","Type":"ContainerStarted","Data":"b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.544079 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" event={"ID":"da6b7a25-5740-4b62-ab8d-dd83057a3d7a","Type":"ContainerStarted","Data":"cbc9fbf7ec513a01bfb2754f7dc81e7312bdd298c83dc179f9aa1216e3ba0fc3"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.546054 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" event={"ID":"ff5f4955-00cc-43bc-aee5-55712109ce87","Type":"ContainerStarted","Data":"3575bbb98f748a802fb4b10f9279e3d233abfc5e475832d2e378bf7690b5918a"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.552392 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.558342 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-pv5z4" event={"ID":"a0f08f8e-efec-4c49-b5e8-7ac2b96bb5ee","Type":"ContainerStarted","Data":"acd3799133d7e2f781939a68d15a5c5e6df7787c216b8afd2be1688d6bef898f"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.559953 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-dp7hb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.560070 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" podUID="79076589-1a80-4683-a090-5aa445b6eba8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.562457 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" podStartSLOduration=87.562442357 podStartE2EDuration="1m27.562442357s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.559611608 +0000 UTC m=+109.928269174" watchObservedRunningTime="2026-02-20 00:10:23.562442357 +0000 UTC m=+109.931099923"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.578413 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" event={"ID":"6623692b-4959-4045-8da6-f64819b323e9","Type":"ContainerStarted","Data":"3c2ee10c7e9307d6771c98680b1119f31bad14f6e2d5e804593f155f8b9689c0"}
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.588507 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-pv5z4" podStartSLOduration=86.588480351 podStartE2EDuration="1m26.588480351s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.581938369 +0000 UTC m=+109.950595935" watchObservedRunningTime="2026-02-20 00:10:23.588480351 +0000 UTC m=+109.957137907"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.588715 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.588841 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.08882389 +0000 UTC m=+110.457481446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.590219 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.090200449 +0000 UTC m=+110.458858025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.589213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.607605 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" podStartSLOduration=86.607579472 podStartE2EDuration="1m26.607579472s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.604486366 +0000 UTC m=+109.973143952" watchObservedRunningTime="2026-02-20 00:10:23.607579472 +0000 UTC m=+109.976237038"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.626334 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.628272 5107 patch_prober.go:28] interesting pod/apiserver-8596bd845d-8zn62 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.628497 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" podUID="01658e78-84f0-4da9-8175-eea829ce2c41" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.629719 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.637608 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-b778b" podStartSLOduration=86.637568945 podStartE2EDuration="1m26.637568945s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.624048209 +0000 UTC m=+109.992705795" watchObservedRunningTime="2026-02-20 00:10:23.637568945 +0000 UTC m=+110.006226511"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.648173 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-5wdfs" podStartSLOduration=86.648135979 podStartE2EDuration="1m26.648135979s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.644876098 +0000 UTC m=+110.013533664" watchObservedRunningTime="2026-02-20 00:10:23.648135979 +0000 UTC m=+110.016793545"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.697454 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.697556 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.197516731 +0000 UTC m=+110.566174297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.697747 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.698158 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.198132188 +0000 UTC m=+110.566789754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.789131 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podStartSLOduration=86.789109446 podStartE2EDuration="1m26.789109446s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.786879425 +0000 UTC m=+110.155536981" watchObservedRunningTime="2026-02-20 00:10:23.789109446 +0000 UTC m=+110.157767012"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.798588 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.798761 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.298729434 +0000 UTC m=+110.667387000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.799699 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.800109 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.300091872 +0000 UTC m=+110.668749528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.845020 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" podStartSLOduration=87.84500053 podStartE2EDuration="1m27.84500053s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.843251181 +0000 UTC m=+110.211908767" watchObservedRunningTime="2026-02-20 00:10:23.84500053 +0000 UTC m=+110.213658096"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.859879 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podStartSLOduration=6.859857023 podStartE2EDuration="6.859857023s" podCreationTimestamp="2026-02-20 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.859545594 +0000 UTC m=+110.228203170" watchObservedRunningTime="2026-02-20 00:10:23.859857023 +0000 UTC m=+110.228514589"
Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.883602 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" podStartSLOduration=86.883584382 podStartE2EDuration="1m26.883584382s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.882880292 +0000 UTC m=+110.251537858" watchObservedRunningTime="2026-02-20 00:10:23.883584382 +0000 UTC m=+110.252241948" Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.897854 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53200: no serving certificate available for the kubelet" Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.897911 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-tcpmn" podStartSLOduration=86.89789308 podStartE2EDuration="1m26.89789308s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:23.897371435 +0000 UTC m=+110.266029001" watchObservedRunningTime="2026-02-20 00:10:23.89789308 +0000 UTC m=+110.266550646" Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.900635 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.900889 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.40080008 +0000 UTC m=+110.769457666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.900972 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:23 crc kubenswrapper[5107]: E0220 00:10:23.901498 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.401481959 +0000 UTC m=+110.770139585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.904285 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-p7fkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 20 00:10:23 crc kubenswrapper[5107]: I0220 00:10:23.904343 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-p7fkg" podUID="cfdfef0c-4111-4a89-aa5a-bf317fc4a772" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.006642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.006824 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.506797406 +0000 UTC m=+110.875454972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.007349 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.008732 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.50871497 +0000 UTC m=+110.877372616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.055318 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.055789 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.081407 5107 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-4cqtl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.18:8443/livez\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.081513 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" podUID="4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.18:8443/livez\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.108600 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 
00:10:24.108794 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.60876143 +0000 UTC m=+110.977418986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.109117 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.109552 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.609531801 +0000 UTC m=+110.978189367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.210694 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.210884 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.710857827 +0000 UTC m=+111.079515383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.211453 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.212096 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.712080051 +0000 UTC m=+111.080737617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.312272 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.312608 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.812594545 +0000 UTC m=+111.181252101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.414446 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.414962 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:24.914945959 +0000 UTC m=+111.283603525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.516044 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.516252 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.016217444 +0000 UTC m=+111.384875010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.516611 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.516948 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.016936274 +0000 UTC m=+111.385593840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.606876 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.607028 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.607065 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.611238 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-sx776" event={"ID":"bc3238c4-513a-495d-835d-da98864cdb8d","Type":"ContainerStarted","Data":"0773dbcac2ad18d1e84cf6ec18b625adba59413d058fd7aac81b5c5cbc10c59b"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.617734 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.618362 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.118339872 +0000 UTC m=+111.486997438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.637767 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" event={"ID":"6fe8d69b-d257-4f34-b535-177002797675","Type":"ContainerStarted","Data":"da700baa261d22bfef8284f8148753742fff2af3392d302496e54c18b69a9e15"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.671817 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8c42" event={"ID":"c1baa873-3e83-4156-a28a-002a10a6147a","Type":"ContainerStarted","Data":"5173a9631b6280b8bb680745be77d0b82dea3db7f9475f89c746a7ffe06ee7ce"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.679431 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" event={"ID":"1d9c615f-40e1-433c-9607-fbd841b62901","Type":"ContainerStarted","Data":"857894d7913b2eefb4832d0f8b401dc5ae0e0c8fa96dd456ea462911332753c6"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.690826 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" event={"ID":"f052856a-22ab-4525-92dc-3baef7ed956a","Type":"ContainerStarted","Data":"74093e3d3acaaa11da4d570f8d3bd55af7d22462e8ffa2d95b374d8ba32b0b13"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.694294 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" event={"ID":"e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf","Type":"ContainerStarted","Data":"1dc897f06cea39e1c9864b0cb530fd993be32824a33347475b82a5c0bde55c1f"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.696081 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" event={"ID":"5c7008e5-9282-4238-a23b-67c75f7cc997","Type":"ContainerStarted","Data":"f3bc1f68cfa5389dd0f6faedfae5c68f2a8c4afd60ae2d26fe6b61cfe159d0c9"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.705663 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kcbw2" event={"ID":"ec63a06b-520e-47da-b761-74cc3462ebd7","Type":"ContainerStarted","Data":"3dbf873025f70fea110d77c3e4eda3309fcbdcf72cca2d37c528187e87bba719"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.711641 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" event={"ID":"b3931e83-9df1-49f4-8f33-5ca09792a062","Type":"ContainerStarted","Data":"11cb607d8412dfe1ba2900777c31300c75c92a9deb9127dabddb618e3f51486f"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.719369 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" event={"ID":"9332ca24-50c6-4625-8e97-a6fd5dd849f3","Type":"ContainerStarted","Data":"bba3fc9dc62cc7b6079a44d647e9e85aee04409c68da7b9dcb874a28bb66c19b"} Feb 20 00:10:24 crc 
kubenswrapper[5107]: I0220 00:10:24.720292 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.720615 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.220601874 +0000 UTC m=+111.589259440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.723793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" event={"ID":"88f0f58c-020c-4ce1-9f3d-3aa63ae92ddd","Type":"ContainerStarted","Data":"2b42a76d6d2cb18c9cfb56801512fca004000fa9afadf9cecbddbadf7088fde4"} Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.743912 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.744197 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.821316 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.821477 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.321451656 +0000 UTC m=+111.690109222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.821892 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.822341 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.32231493 +0000 UTC m=+111.690972496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.854130 5107 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-dp7hb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.854196 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" podUID="79076589-1a80-4683-a090-5aa445b6eba8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.855275 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.855314 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.855324 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-rq6pw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 20 00:10:24 crc 
kubenswrapper[5107]: I0220 00:10:24.855377 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" podUID="dccf73a9-9c9e-4211-8b18-ed7d205bf9d1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.874742 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-ph8fq" podStartSLOduration=87.874727467 podStartE2EDuration="1m27.874727467s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:24.871774865 +0000 UTC m=+111.240432441" watchObservedRunningTime="2026-02-20 00:10:24.874727467 +0000 UTC m=+111.243385033" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.888733 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.894673 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" podStartSLOduration=87.894654281 podStartE2EDuration="1m27.894654281s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:24.893416636 +0000 UTC m=+111.262074202" watchObservedRunningTime="2026-02-20 00:10:24.894654281 +0000 UTC m=+111.263311867" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.917531 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-26g2b" podStartSLOduration=87.917493105 podStartE2EDuration="1m27.917493105s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:24.914967805 +0000 UTC m=+111.283625371" watchObservedRunningTime="2026-02-20 00:10:24.917493105 +0000 UTC m=+111.286150671" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.923433 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:24 crc kubenswrapper[5107]: E0220 00:10:24.926431 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.426409453 +0000 UTC m=+111.795067019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.948613 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-579ws" podStartSLOduration=87.948576489 podStartE2EDuration="1m27.948576489s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:24.942893501 +0000 UTC m=+111.311551057" watchObservedRunningTime="2026-02-20 00:10:24.948576489 +0000 UTC m=+111.317234055" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.978562 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.980516 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.990365 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-cvxll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" start-of-body= Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.994211 5107 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" Feb 20 00:10:24 crc kubenswrapper[5107]: I0220 00:10:24.990414 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-klv4w container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.000019 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" podUID="e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.000568 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c9dcq" podStartSLOduration=88.000548574 podStartE2EDuration="1m28.000548574s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:24.967027812 +0000 UTC m=+111.335685408" watchObservedRunningTime="2026-02-20 00:10:25.000548574 +0000 UTC m=+111.369206140" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.021413 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gcwx4" podStartSLOduration=88.021390743 podStartE2EDuration="1m28.021390743s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.013030461 +0000 UTC m=+111.381688027" watchObservedRunningTime="2026-02-20 00:10:25.021390743 +0000 UTC m=+111.390048309" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.054616 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.057728 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" podStartSLOduration=88.057712382 podStartE2EDuration="1m28.057712382s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.055980094 +0000 UTC m=+111.424637660" watchObservedRunningTime="2026-02-20 00:10:25.057712382 +0000 UTC m=+111.426369948" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.061117 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.561102627 +0000 UTC m=+111.929760193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.082710 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kcbw2" podStartSLOduration=8.082687006 podStartE2EDuration="8.082687006s" podCreationTimestamp="2026-02-20 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.077322007 +0000 UTC m=+111.445979573" watchObservedRunningTime="2026-02-20 00:10:25.082687006 +0000 UTC m=+111.451344572" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.155848 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.156281 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.656265231 +0000 UTC m=+112.024922797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.240257 5107 ???:1] "http: TLS handshake error from 192.168.126.11:53206: no serving certificate available for the kubelet" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.257330 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.257754 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.757734231 +0000 UTC m=+112.126391837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.358093 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.358294 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.858249775 +0000 UTC m=+112.226907351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.358635 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.358970 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.858957594 +0000 UTC m=+112.227615160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.459388 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.459591 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.95955831 +0000 UTC m=+112.328215886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.459891 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.460551 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:25.960513577 +0000 UTC m=+112.329171153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.560879 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.561409 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.06138845 +0000 UTC m=+112.430046016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.611753 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:25 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:25 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:25 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.611837 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.662445 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.662755 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-20 00:10:26.162741897 +0000 UTC m=+112.531399463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.733219 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" event={"ID":"b2556f1d-c6f7-47d6-adf8-b2e5fd522346","Type":"ContainerStarted","Data":"c66c4eca4f7c29927b6c7e95777cb3f07feeff1b43ead9967a77ef471e6eae62"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.742089 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8c42" event={"ID":"c1baa873-3e83-4156-a28a-002a10a6147a","Type":"ContainerStarted","Data":"d407a20007d5fc553a830b08514d2fd09d95af8cee75305ddde01189c44f6c0c"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.742870 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-x8c42" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.745175 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" event={"ID":"20c92049-0ab0-4940-8a29-851dfe180b34","Type":"ContainerStarted","Data":"7b06793b3b4b14d0bb310538d0b5fc0354a971ed281735f73413beb4938c68ce"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.745405 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" Feb 20 00:10:25 crc 
kubenswrapper[5107]: I0220 00:10:25.747185 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" event={"ID":"ff5f4955-00cc-43bc-aee5-55712109ce87","Type":"ContainerStarted","Data":"61869f4af172145368ac86d5dbc8ad4a4bb783fff54c9c8dbd5fda20c1a602c4"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.748988 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" event={"ID":"b7763f2e-cc78-4dd1-a5d8-599e880ed627","Type":"ContainerStarted","Data":"740565f68cb2dabe658102818d5b70eb0f91de426bd1df15e4b67d5b8543ec5b"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.749152 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.750826 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-lp56s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.750875 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.750982 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" event={"ID":"f052856a-22ab-4525-92dc-3baef7ed956a","Type":"ContainerStarted","Data":"47ff266499e2fda4e4390cf3c78ad1d63f54c128e1b7d0e65da7446449e466c7"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.754591 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" event={"ID":"a78d5238-801b-4521-91d2-6b9bed68d61e","Type":"ContainerStarted","Data":"9cf8a1af8c582d08e1f22ffde8f935a9dc02cb81bcf4455e80f9f8e32eb6317b"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.757190 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" event={"ID":"b78a9450-1db5-496b-a8e7-2c12d8e5525f","Type":"ContainerStarted","Data":"a1cf809cf7b3cdd9a195bcf6aa68e97d7109bd714bf89ea6b78873f3aa8454d8"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.759921 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" event={"ID":"55527a1c-71b2-4254-82ac-da17df407862","Type":"ContainerStarted","Data":"9c95317e5cc92e627462d7cea7e8f00a00b515f385bd88957919717816c58bfb"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.762447 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" event={"ID":"6fe8d69b-d257-4f34-b535-177002797675","Type":"ContainerStarted","Data":"0567b9c27c51d3fb1bec4245e1428df582b63ee651d8aaa363ea5520b1414342"} Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.762973 5107 patch_prober.go:28] interesting pod/console-operator-67c89758df-rq6pw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.763087 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" podUID="dccf73a9-9c9e-4211-8b18-ed7d205bf9d1" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.762981 5107 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-klv4w container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.763294 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" podUID="e282ef7a-b9af-4a1f-b4c9-7f9544c4ebcf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.763332 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.763393 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.263376794 +0000 UTC m=+112.632034360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.763889 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.764395 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.264382171 +0000 UTC m=+112.633039737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.766308 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.766843 5107 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-cvxll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.766894 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.44:8080/healthz\": dial tcp 10.217.0.44:8080: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.768880 5107 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-c42nb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.768928 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" 
podUID="1d9c615f-40e1-433c-9607-fbd841b62901" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.782274 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-wkzt7" podStartSLOduration=88.782256288 podStartE2EDuration="1m28.782256288s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.781879128 +0000 UTC m=+112.150536694" watchObservedRunningTime="2026-02-20 00:10:25.782256288 +0000 UTC m=+112.150913854" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.782655 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-sx776" podStartSLOduration=89.782648589 podStartE2EDuration="1m29.782648589s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.106168879 +0000 UTC m=+111.474826445" watchObservedRunningTime="2026-02-20 00:10:25.782648589 +0000 UTC m=+112.151306155" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.837992 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl" podStartSLOduration=88.837974407 podStartE2EDuration="1m28.837974407s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.81436054 +0000 UTC m=+112.183018106" watchObservedRunningTime="2026-02-20 00:10:25.837974407 +0000 UTC m=+112.206631983" Feb 20 
00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.839671 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-mr4hv" podStartSLOduration=88.839664624 podStartE2EDuration="1m28.839664624s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.835061196 +0000 UTC m=+112.203718762" watchObservedRunningTime="2026-02-20 00:10:25.839664624 +0000 UTC m=+112.208322190" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.861688 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-kdptd" podStartSLOduration=88.861672735 podStartE2EDuration="1m28.861672735s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.859039532 +0000 UTC m=+112.227697098" watchObservedRunningTime="2026-02-20 00:10:25.861672735 +0000 UTC m=+112.230330301" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.868900 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.870999 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.370982264 +0000 UTC m=+112.739639830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.897005 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-kv84j" podStartSLOduration=88.896990207 podStartE2EDuration="1m28.896990207s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.895518356 +0000 UTC m=+112.264175922" watchObservedRunningTime="2026-02-20 00:10:25.896990207 +0000 UTC m=+112.265647773" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.922491 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-m4t5z" podStartSLOduration=88.922476125 podStartE2EDuration="1m28.922476125s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.921738405 +0000 UTC m=+112.290395971" watchObservedRunningTime="2026-02-20 00:10:25.922476125 +0000 UTC m=+112.291133691" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.955413 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" podStartSLOduration=88.9553961 podStartE2EDuration="1m28.9553961s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.954549706 +0000 UTC m=+112.323207272" watchObservedRunningTime="2026-02-20 00:10:25.9553961 +0000 UTC m=+112.324053666" Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.974213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:25 crc kubenswrapper[5107]: E0220 00:10:25.974547 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.474534732 +0000 UTC m=+112.843192298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:25 crc kubenswrapper[5107]: I0220 00:10:25.982431 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lw4ks" podStartSLOduration=88.982414171 podStartE2EDuration="1m28.982414171s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:25.980803556 +0000 UTC m=+112.349461122" watchObservedRunningTime="2026-02-20 00:10:25.982414171 +0000 UTC m=+112.351071737" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.006749 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-zg2vd" podStartSLOduration=89.006732957 podStartE2EDuration="1m29.006732957s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:26.003320182 +0000 UTC m=+112.371977748" watchObservedRunningTime="2026-02-20 00:10:26.006732957 +0000 UTC m=+112.375390523" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.010976 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wrrss"] Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.032351 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x8c42" 
podStartSLOduration=9.032332248 podStartE2EDuration="9.032332248s" podCreationTimestamp="2026-02-20 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:26.031389302 +0000 UTC m=+112.400046868" watchObservedRunningTime="2026-02-20 00:10:26.032332248 +0000 UTC m=+112.400989814" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.075430 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.075616 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.57558969 +0000 UTC m=+112.944247256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.079358 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.079825 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.579812668 +0000 UTC m=+112.948470234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.180523 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.180747 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.680715792 +0000 UTC m=+113.049373358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.282814 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.283264 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.783248631 +0000 UTC m=+113.151906187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.383435 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.383642 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.88360914 +0000 UTC m=+113.252266706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.383877 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.384313 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.88430317 +0000 UTC m=+113.252960816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.485199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.485468 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.985425369 +0000 UTC m=+113.354082935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.485594 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.485907 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:26.985892792 +0000 UTC m=+113.354550358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.587461 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.587634 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.087606209 +0000 UTC m=+113.456263775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.587815 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.588156 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.088135703 +0000 UTC m=+113.456793269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.611848 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:26 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:26 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:26 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.611909 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.633365 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-dp7hb" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.689234 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.689659 5107 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.189643795 +0000 UTC m=+113.558301361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.768520 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" event={"ID":"b4cec451-c20b-4fbe-bfc6-cac323ecd942","Type":"ContainerStarted","Data":"0b43d5c4c16d4f3f2cec0e5875e69741f23b3ea75696df02ae4b57989aafc76c"} Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.769239 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" gracePeriod=30 Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.770269 5107 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-lp56s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" start-of-body= Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.770312 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" 
podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.782792 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-c42nb" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.790384 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.790717 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.290704723 +0000 UTC m=+113.659362289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.796877 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-klv4w" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.855747 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-kgrwk" Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.892002 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.892172 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.392120522 +0000 UTC m=+113.760778098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.892519 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.892799 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.3927852 +0000 UTC m=+113.761442756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.994642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.994820 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.494776704 +0000 UTC m=+113.863434270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:26 crc kubenswrapper[5107]: I0220 00:10:26.995277 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:26 crc kubenswrapper[5107]: E0220 00:10:26.995571 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.495556116 +0000 UTC m=+113.864213682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.096266 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.096435 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.596403689 +0000 UTC m=+113.965061255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.096714 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.097045 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.597031726 +0000 UTC m=+113.965689422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.198759 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.198940 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.698912548 +0000 UTC m=+114.067570104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.199057 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.199349 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.699336389 +0000 UTC m=+114.067993955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.300512 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.300668 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.800646115 +0000 UTC m=+114.169303691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.300816 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.301187 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.80117446 +0000 UTC m=+114.169832026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.401763 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.401918 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.901898009 +0000 UTC m=+114.270555575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.402213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.402530 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:27.902511406 +0000 UTC m=+114.271168972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.503508 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.503736 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.003696278 +0000 UTC m=+114.372353844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.503837 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.504219 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.004207862 +0000 UTC m=+114.372865428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.604997 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.605166 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.105132597 +0000 UTC m=+114.473790163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.605494 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.605862 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.105824546 +0000 UTC m=+114.474482112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.611585 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:27 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:27 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:27 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.611645 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.706596 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.706734 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.20671817 +0000 UTC m=+114.575375736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.706810 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.707087 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.20707945 +0000 UTC m=+114.575737016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.775333 5107 generic.go:358] "Generic (PLEG): container finished" podID="da6b7a25-5740-4b62-ab8d-dd83057a3d7a" containerID="cbc9fbf7ec513a01bfb2754f7dc81e7312bdd298c83dc179f9aa1216e3ba0fc3" exitCode=0 Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.775579 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" event={"ID":"da6b7a25-5740-4b62-ab8d-dd83057a3d7a","Type":"ContainerDied","Data":"cbc9fbf7ec513a01bfb2754f7dc81e7312bdd298c83dc179f9aa1216e3ba0fc3"} Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.808071 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.808278 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.308251002 +0000 UTC m=+114.676908568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.808386 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.808698 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.308679354 +0000 UTC m=+114.677336920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.855062 5107 ???:1] "http: TLS handshake error from 192.168.126.11:41294: no serving certificate available for the kubelet" Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.909667 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.909854 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.409827455 +0000 UTC m=+114.778485021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:27 crc kubenswrapper[5107]: I0220 00:10:27.910497 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:27 crc kubenswrapper[5107]: E0220 00:10:27.910969 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.410948566 +0000 UTC m=+114.779606122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.011830 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.012031 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.511996904 +0000 UTC m=+114.880654470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.012182 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.012576 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.51255774 +0000 UTC m=+114.881215306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.113694 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.113874 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.613846425 +0000 UTC m=+114.982503991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.114434 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.114694 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.614685898 +0000 UTC m=+114.983343464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.215483 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.215687 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.715655964 +0000 UTC m=+115.084313540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.215960 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.216418 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.716398775 +0000 UTC m=+115.085056341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.260370 5107 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.317067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.317571 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.817543146 +0000 UTC m=+115.186200712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.418401 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.418720 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:28.918704987 +0000 UTC m=+115.287362553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.519939 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.520173 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-20 00:10:29.020127226 +0000 UTC m=+115.388784792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.535439 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.577992 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.578203 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.580077 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.606896 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:28 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:28 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:28 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.606963 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.621428 5107 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-20T00:10:28.260393457Z","UUID":"d0f25aac-ae20-49f4-9ba0-a110c78bb9c1","Handler":null,"Name":"","Endpoint":""} Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.623466 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: E0220 00:10:28.623910 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-20 00:10:29.123890649 +0000 UTC m=+115.492548285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-7txlk" (UID: "f9644e65-d917-4c28-a428-743979d10f4e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.625710 5107 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.625737 5107 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.638769 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.644662 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zn62" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.729048 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.729326 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lqf\" (UniqueName: 
\"kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.729458 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.729493 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.735913 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.745943 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.830703 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5lqf\" (UniqueName: \"kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.830776 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.831086 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.831202 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.831916 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.831943 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.839455 5107 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.839500 5107 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.850377 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5lqf\" (UniqueName: \"kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf\") pod \"community-operators-w2bt6\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.869674 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-7txlk\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.894709 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.919162 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.919201 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" event={"ID":"b4cec451-c20b-4fbe-bfc6-cac323ecd942","Type":"ContainerStarted","Data":"4f4c755e3b8928e12980b13412c607b88c0e5c1852f3fcfc1cef19433ca70e3d"} Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.919223 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" event={"ID":"b4cec451-c20b-4fbe-bfc6-cac323ecd942","Type":"ContainerStarted","Data":"819a8dd9da594f817aaa86ae8965785ca8a73e469797bcb3665c5dae623cb2f5"} Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.919236 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" event={"ID":"b4cec451-c20b-4fbe-bfc6-cac323ecd942","Type":"ContainerStarted","Data":"b842793dcd87b590bcd36c7a47576424effe7b67562e22e2910a412a38810bb3"} Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.919358 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.922757 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.942359 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.977655 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-24bsp" podStartSLOduration=11.977636971 podStartE2EDuration="11.977636971s" podCreationTimestamp="2026-02-20 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:28.976849639 +0000 UTC m=+115.345507205" watchObservedRunningTime="2026-02-20 00:10:28.977636971 +0000 UTC m=+115.346294537" Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.992308 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:10:28 crc kubenswrapper[5107]: I0220 00:10:28.992554 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.033751 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.033870 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.033990 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptbf\" (UniqueName: \"kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.054433 5107 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-4cqtl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]log ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]etcd ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/generic-apiserver-start-informers ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/max-in-flight-filter ok 
Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 20 00:10:29 crc kubenswrapper[5107]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/project.openshift.io-projectcache ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-startinformers ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 20 00:10:29 crc kubenswrapper[5107]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 20 00:10:29 crc kubenswrapper[5107]: livez check failed Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.054515 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" podUID="4cfc52bc-6c55-45eb-9ce8-9d8ef0be1c8c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.117737 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rptbf\" (UniqueName: \"kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135556 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7497k\" (UniqueName: \"kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135590 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135728 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135773 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content\") pod 
\"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.135806 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.136440 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.136483 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.136953 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.159172 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.159313 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.165574 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptbf\" (UniqueName: \"kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf\") pod \"certified-operators-rf4ps\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.217647 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.238107 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7497k\" (UniqueName: \"kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.238245 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.238273 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.238774 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.239414 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.265461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7497k\" (UniqueName: \"kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k\") pod \"community-operators-z54dq\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.314847 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.321922 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.338833 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume\") pod \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.339107 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume\") pod \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.339857 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7kj8\" (UniqueName: \"kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8\") pod \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\" (UID: \"da6b7a25-5740-4b62-ab8d-dd83057a3d7a\") " Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.340361 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.340495 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgwg\" (UniqueName: \"kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 
00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.340615 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.339805 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "da6b7a25-5740-4b62-ab8d-dd83057a3d7a" (UID: "da6b7a25-5740-4b62-ab8d-dd83057a3d7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.348593 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8" (OuterVolumeSpecName: "kube-api-access-l7kj8") pod "da6b7a25-5740-4b62-ab8d-dd83057a3d7a" (UID: "da6b7a25-5740-4b62-ab8d-dd83057a3d7a"). InnerVolumeSpecName "kube-api-access-l7kj8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.349253 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da6b7a25-5740-4b62-ab8d-dd83057a3d7a" (UID: "da6b7a25-5740-4b62-ab8d-dd83057a3d7a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.359758 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"] Feb 20 00:10:29 crc kubenswrapper[5107]: W0220 00:10:29.369304 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9644e65_d917_4c28_a428_743979d10f4e.slice/crio-54a9b420f99e36250b15c08fc4eda098e026f9aad9d1ef5b7977edfab5c43adb WatchSource:0}: Error finding container 54a9b420f99e36250b15c08fc4eda098e026f9aad9d1ef5b7977edfab5c43adb: Status 404 returned error can't find the container with id 54a9b420f99e36250b15c08fc4eda098e026f9aad9d1ef5b7977edfab5c43adb Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.442738 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.442811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgwg\" (UniqueName: \"kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.442907 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc 
kubenswrapper[5107]: I0220 00:10:29.443060 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.443077 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.443089 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7kj8\" (UniqueName: \"kubernetes.io/projected/da6b7a25-5740-4b62-ab8d-dd83057a3d7a-kube-api-access-l7kj8\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.443854 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.444627 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.462276 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgwg\" (UniqueName: \"kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg\") pod \"certified-operators-gxqz9\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: 
I0220 00:10:29.465905 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:10:29 crc kubenswrapper[5107]: W0220 00:10:29.473926 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod873048c2_5622_40a5_be53_dbdbca3b95a7.slice/crio-f2d065b47afe136e99d366fc34909bd8e284255d1eb57f097e0b816b93bdc190 WatchSource:0}: Error finding container f2d065b47afe136e99d366fc34909bd8e284255d1eb57f097e0b816b93bdc190: Status 404 returned error can't find the container with id f2d065b47afe136e99d366fc34909bd8e284255d1eb57f097e0b816b93bdc190 Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.517445 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.518648 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.531339 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:10:29 crc kubenswrapper[5107]: W0220 00:10:29.538262 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69b44045_c596_43d0_bf80_5e5c89671bef.slice/crio-fb18d954cd9bc91cd579a3c8b788db3b2fb154e096a210d26620bbb124f81998 WatchSource:0}: Error finding container fb18d954cd9bc91cd579a3c8b788db3b2fb154e096a210d26620bbb124f81998: Status 404 returned error can't find the container with id fb18d954cd9bc91cd579a3c8b788db3b2fb154e096a210d26620bbb124f81998 Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.608549 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:29 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:29 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:29 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.608632 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.742575 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:10:29 crc kubenswrapper[5107]: W0220 00:10:29.765004 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb3e406_6312_4e7e_bcaa_f3c532a0c1ea.slice/crio-14188e041540619f3da1e0c3e00ce913f9158bd6c697f6a95c6680c6fcc4d156 WatchSource:0}: Error finding container 14188e041540619f3da1e0c3e00ce913f9158bd6c697f6a95c6680c6fcc4d156: Status 404 returned error can't find the container with id 14188e041540619f3da1e0c3e00ce913f9158bd6c697f6a95c6680c6fcc4d156 Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.789287 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" event={"ID":"da6b7a25-5740-4b62-ab8d-dd83057a3d7a","Type":"ContainerDied","Data":"c0f7ea4eab1b8be04af0c94e3c6151e151c1267d6cc8ccc877b6f9bf937c351a"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.789326 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f7ea4eab1b8be04af0c94e3c6151e151c1267d6cc8ccc877b6f9bf937c351a" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.789406 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525760-gk87r" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.792483 5107 generic.go:358] "Generic (PLEG): container finished" podID="69b44045-c596-43d0-bf80-5e5c89671bef" containerID="6a77266d76a61dac2085923eee998b63f39e68d10b34de0151c84d8fa7b62679" exitCode=0 Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.792554 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerDied","Data":"6a77266d76a61dac2085923eee998b63f39e68d10b34de0151c84d8fa7b62679"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.792582 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerStarted","Data":"fb18d954cd9bc91cd579a3c8b788db3b2fb154e096a210d26620bbb124f81998"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.797271 5107 generic.go:358] "Generic (PLEG): container finished" podID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerID="63ae451af7e325f883dfbd812995a085d5541c899c93f6daf6382248d049123c" exitCode=0 Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.797457 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerDied","Data":"63ae451af7e325f883dfbd812995a085d5541c899c93f6daf6382248d049123c"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.797520 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerStarted","Data":"f2d065b47afe136e99d366fc34909bd8e284255d1eb57f097e0b816b93bdc190"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.798986 5107 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerStarted","Data":"14188e041540619f3da1e0c3e00ce913f9158bd6c697f6a95c6680c6fcc4d156"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.800952 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" event={"ID":"f9644e65-d917-4c28-a428-743979d10f4e","Type":"ContainerStarted","Data":"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.800996 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" event={"ID":"f9644e65-d917-4c28-a428-743979d10f4e","Type":"ContainerStarted","Data":"54a9b420f99e36250b15c08fc4eda098e026f9aad9d1ef5b7977edfab5c43adb"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.801665 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.811859 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerStarted","Data":"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.811904 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerStarted","Data":"e1bfdb5e50566f257b7be4abbb352ab03391524d3c83f35a2bc1e0e3e539c428"} Feb 20 00:10:29 crc kubenswrapper[5107]: I0220 00:10:29.911630 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" podStartSLOduration=92.911613586 
podStartE2EDuration="1m32.911613586s" podCreationTimestamp="2026-02-20 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:29.868317373 +0000 UTC m=+116.236974939" watchObservedRunningTime="2026-02-20 00:10:29.911613586 +0000 UTC m=+116.280271152" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.317347 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.318290 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da6b7a25-5740-4b62-ab8d-dd83057a3d7a" containerName="collect-profiles" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.318307 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6b7a25-5740-4b62-ab8d-dd83057a3d7a" containerName="collect-profiles" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.318415 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="da6b7a25-5740-4b62-ab8d-dd83057a3d7a" containerName="collect-profiles" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.439627 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.439784 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.442465 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.444796 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.475613 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-p7fkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.475699 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-p7fkg" podUID="cfdfef0c-4111-4a89-aa5a-bf317fc4a772" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.494675 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.534452 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.540965 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.545112 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.557482 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.557534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.560590 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.607950 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:30 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:30 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:30 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.608006 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" 
podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659098 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szh8x\" (UniqueName: \"kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659241 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659297 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659333 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659388 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.659806 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.680042 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.756815 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.760241 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.760389 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-szh8x\" (UniqueName: \"kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.760834 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.760859 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.761251 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " 
pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.782423 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-szh8x\" (UniqueName: \"kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x\") pod \"redhat-marketplace-rc7nq\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.820044 5107 generic.go:358] "Generic (PLEG): container finished" podID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerID="6a7d69079efec247a532784840f1b61d7989409655477f4905cb11a75f790e20" exitCode=0 Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.820095 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerDied","Data":"6a7d69079efec247a532784840f1b61d7989409655477f4905cb11a75f790e20"} Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.823746 5107 generic.go:358] "Generic (PLEG): container finished" podID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerID="7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b" exitCode=0 Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.823820 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerDied","Data":"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b"} Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.854490 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.922651 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.922702 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-sx776" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.925950 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-sx776 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.925995 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-sx776" podUID="bc3238c4-513a-495d-835d-da98864cdb8d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.932807 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.949783 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:10:30 crc kubenswrapper[5107]: I0220 00:10:30.949904 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.001742 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.064427 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.064655 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.064678 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89sl\" (UniqueName: \"kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.096330 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.166500 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " 
pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.166551 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h89sl\" (UniqueName: \"kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.166956 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.167655 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.167711 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.190535 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89sl\" (UniqueName: \"kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl\") pod \"redhat-marketplace-ct7wl\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " 
pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.218540 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.220298 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.279662 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.527156 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:10:31 crc kubenswrapper[5107]: W0220 00:10:31.554968 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67484e0a_1c92_441c_9b19_892f98f62176.slice/crio-368f888a94c749ec4b9f29eff7394e6631590bf738e81cf2de247a9b8ce65c82 WatchSource:0}: Error finding container 368f888a94c749ec4b9f29eff7394e6631590bf738e81cf2de247a9b8ce65c82: Status 404 returned error can't find the container with id 368f888a94c749ec4b9f29eff7394e6631590bf738e81cf2de247a9b8ce65c82 Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.604701 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.608384 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:31 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:31 crc 
kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:31 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.608426 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.838977 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d346e22-367f-4fff-8011-edab1a0809f5","Type":"ContainerStarted","Data":"8ace48b81210fa5648a606aaa19be2625bc194df21aa125c6de3a9539658ee59"} Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.839056 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d346e22-367f-4fff-8011-edab1a0809f5","Type":"ContainerStarted","Data":"057012b907befbafd3088dc9f7d97112a58392bb6d3ba86c64e275b3725e7f76"} Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.844471 5107 generic.go:358] "Generic (PLEG): container finished" podID="67484e0a-1c92-441c-9b19-892f98f62176" containerID="0db42d086e2586904cb569c178ed1c9b447de19a76ec055919be07d76366ec9f" exitCode=0 Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.844502 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerDied","Data":"0db42d086e2586904cb569c178ed1c9b447de19a76ec055919be07d76366ec9f"} Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.844543 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerStarted","Data":"368f888a94c749ec4b9f29eff7394e6631590bf738e81cf2de247a9b8ce65c82"} Feb 20 00:10:31 crc 
kubenswrapper[5107]: I0220 00:10:31.846789 5107 generic.go:358] "Generic (PLEG): container finished" podID="582a976b-611f-4153-9c08-eb9f343b290f" containerID="b0b3c3dbdfe7075d03727e8600a0ce01f99e3c28635d361cdce385b77ebd2553" exitCode=0 Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.846907 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerDied","Data":"b0b3c3dbdfe7075d03727e8600a0ce01f99e3c28635d361cdce385b77ebd2553"} Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.846946 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerStarted","Data":"9b495beaad35c6babc007a8524700bd99e84636c47d00db521375b29a5041190"} Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.853511 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=1.8534951130000001 podStartE2EDuration="1.853495113s" podCreationTimestamp="2026-02-20 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:31.853284867 +0000 UTC m=+118.221942463" watchObservedRunningTime="2026-02-20 00:10:31.853495113 +0000 UTC m=+118.222152679" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.936127 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.948737 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.949675 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:10:31 crc kubenswrapper[5107]: I0220 00:10:31.951267 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.094283 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnqd\" (UniqueName: \"kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.094551 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.094573 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.196209 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnqd\" (UniqueName: \"kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd\") pod \"redhat-operators-4kmt8\" (UID: 
\"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.196272 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.196294 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.196813 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.197309 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.221941 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnqd\" (UniqueName: \"kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd\") pod \"redhat-operators-4kmt8\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " 
pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.275382 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.341413 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:10:32 crc kubenswrapper[5107]: W0220 00:10:32.573304 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9613fac6_e4cf_4553_b8a7_7b52986c7e27.slice/crio-92e56d2e76573fe7f7a622e58c493ba44db4f796bad0931d1c522a62f6606ccc WatchSource:0}: Error finding container 92e56d2e76573fe7f7a622e58c493ba44db4f796bad0931d1c522a62f6606ccc: Status 404 returned error can't find the container with id 92e56d2e76573fe7f7a622e58c493ba44db4f796bad0931d1c522a62f6606ccc Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.608121 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:32 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:32 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:32 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.608197 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.704019 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 
00:10:32.704717 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.718252 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.812575 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.812715 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qqm\" (UniqueName: \"kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.812762 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.859129 5107 generic.go:358] "Generic (PLEG): container finished" podID="2d346e22-367f-4fff-8011-edab1a0809f5" containerID="8ace48b81210fa5648a606aaa19be2625bc194df21aa125c6de3a9539658ee59" exitCode=0 Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.859187 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"2d346e22-367f-4fff-8011-edab1a0809f5","Type":"ContainerDied","Data":"8ace48b81210fa5648a606aaa19be2625bc194df21aa125c6de3a9539658ee59"} Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.874455 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerStarted","Data":"92e56d2e76573fe7f7a622e58c493ba44db4f796bad0931d1c522a62f6606ccc"} Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.914708 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.915002 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.915247 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qqm\" (UniqueName: \"kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.915801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " 
pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.932011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:32 crc kubenswrapper[5107]: I0220 00:10:32.939895 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qqm\" (UniqueName: \"kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm\") pod \"redhat-operators-jvrxj\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.006063 5107 ???:1] "http: TLS handshake error from 192.168.126.11:41296: no serving certificate available for the kubelet" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.039486 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.086736 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.102073 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.106544 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.106769 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.108114 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.220941 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.221087 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.324883 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.324989 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.325499 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.340792 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.355859 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: W0220 00:10:33.372238 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58158355_aadb_4a44_8f4e_3e0c20d702e6.slice/crio-d1f10e6592006da36cf376970971b1e6732cd0dd5ce4066152bc17f4e8e476ca WatchSource:0}: Error finding container d1f10e6592006da36cf376970971b1e6732cd0dd5ce4066152bc17f4e8e476ca: Status 404 returned error can't find the container with id d1f10e6592006da36cf376970971b1e6732cd0dd5ce4066152bc17f4e8e476ca Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.429470 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.611888 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:33 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:33 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:33 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.611946 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.714386 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.884375 5107 generic.go:358] "Generic (PLEG): container finished" podID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerID="c6ff514936c52b8d3bba5332fc980553f3c00d0248bdd1d0d1829b07e0b52be6" exitCode=0 Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.884465 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerDied","Data":"c6ff514936c52b8d3bba5332fc980553f3c00d0248bdd1d0d1829b07e0b52be6"} Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.888059 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"95617b10-56d5-420d-bf7c-5baf308736dc","Type":"ContainerStarted","Data":"c108ca183591cbf1806261e1b1940394975e3d72ef6c13b2ce0a12e12d790985"} Feb 20 00:10:33 crc 
kubenswrapper[5107]: I0220 00:10:33.890855 5107 generic.go:358] "Generic (PLEG): container finished" podID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerID="186519ba74071db50302d9f0db88edcec6ccfb105f19c6f42aa8f6f7edf5a3af" exitCode=0 Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.892118 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerDied","Data":"186519ba74071db50302d9f0db88edcec6ccfb105f19c6f42aa8f6f7edf5a3af"} Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.892165 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerStarted","Data":"d1f10e6592006da36cf376970971b1e6732cd0dd5ce4066152bc17f4e8e476ca"} Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.905351 5107 patch_prober.go:28] interesting pod/downloads-747b44746d-p7fkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 20 00:10:33 crc kubenswrapper[5107]: I0220 00:10:33.905399 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-p7fkg" podUID="cfdfef0c-4111-4a89-aa5a-bf317fc4a772" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.054302 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.059459 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-4cqtl" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 
00:10:34.178020 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.251929 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.252186 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.252343 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.252640 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.267518 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.267623 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.268357 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.272922 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.358623 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir\") pod \"2d346e22-367f-4fff-8011-edab1a0809f5\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " Feb 20 00:10:34 crc 
kubenswrapper[5107]: I0220 00:10:34.358700 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access\") pod \"2d346e22-367f-4fff-8011-edab1a0809f5\" (UID: \"2d346e22-367f-4fff-8011-edab1a0809f5\") " Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.358983 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.359283 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d346e22-367f-4fff-8011-edab1a0809f5" (UID: "2d346e22-367f-4fff-8011-edab1a0809f5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.360736 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.362337 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d346e22-367f-4fff-8011-edab1a0809f5" (UID: "2d346e22-367f-4fff-8011-edab1a0809f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.372378 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cee716c2-1a9a-4944-9b9f-06284973b167-metrics-certs\") pod \"network-metrics-daemon-j2l2p\" (UID: \"cee716c2-1a9a-4944-9b9f-06284973b167\") " pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.409436 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.427299 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.431883 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.437223 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j2l2p" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.471256 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d346e22-367f-4fff-8011-edab1a0809f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.471389 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d346e22-367f-4fff-8011-edab1a0809f5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.502235 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.668857 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:34 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:34 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:34 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.669343 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 00:10:34 crc kubenswrapper[5107]: W0220 00:10:34.780365 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-e76af4b4d2fbff618923c365133e7b73087c09affe1bd770ee113859877bd599 WatchSource:0}: Error finding container e76af4b4d2fbff618923c365133e7b73087c09affe1bd770ee113859877bd599: Status 404 returned error can't find the container with id e76af4b4d2fbff618923c365133e7b73087c09affe1bd770ee113859877bd599 Feb 20 00:10:34 crc kubenswrapper[5107]: E0220 00:10:34.857393 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:10:34 crc kubenswrapper[5107]: E0220 00:10:34.868270 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:10:34 crc kubenswrapper[5107]: E0220 00:10:34.869624 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:10:34 crc kubenswrapper[5107]: E0220 00:10:34.869700 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.912300 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"95617b10-56d5-420d-bf7c-5baf308736dc","Type":"ContainerStarted","Data":"fbfb292b5ae721a4cb584d1241b22d69b4e595d095de8a005f02b6566c5ad5fc"} Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.923647 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"2d346e22-367f-4fff-8011-edab1a0809f5","Type":"ContainerDied","Data":"057012b907befbafd3088dc9f7d97112a58392bb6d3ba86c64e275b3725e7f76"} Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.923700 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057012b907befbafd3088dc9f7d97112a58392bb6d3ba86c64e275b3725e7f76" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.923833 5107 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.926720 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j2l2p"] Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.926851 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=1.926828494 podStartE2EDuration="1.926828494s" podCreationTimestamp="2026-02-20 00:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:10:34.92634496 +0000 UTC m=+121.295002526" watchObservedRunningTime="2026-02-20 00:10:34.926828494 +0000 UTC m=+121.295486060" Feb 20 00:10:34 crc kubenswrapper[5107]: I0220 00:10:34.934697 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e76af4b4d2fbff618923c365133e7b73087c09affe1bd770ee113859877bd599"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.607643 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:35 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld Feb 20 00:10:35 crc kubenswrapper[5107]: [+]process-running ok Feb 20 00:10:35 crc kubenswrapper[5107]: healthz check failed Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.607985 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.767434 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-rq6pw" Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.769302 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.941278 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"30ab249add3210a25d5d983adb4acc1db9917906f5066f18e8ca8d14839b1568"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.942238 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.943514 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"e684deac54fd156e32084cc669a6ab714f6acbf1c5f460560c5cf2310fea6e54"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.943540 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d0fdd0b28a894e6f4ce972d151e05b1e1c3c05ae1cc20eee1fc29bad9aa99455"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.945732 5107 generic.go:358] "Generic (PLEG): container finished" podID="95617b10-56d5-420d-bf7c-5baf308736dc" containerID="fbfb292b5ae721a4cb584d1241b22d69b4e595d095de8a005f02b6566c5ad5fc" exitCode=0 Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 
00:10:35.945933 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"95617b10-56d5-420d-bf7c-5baf308736dc","Type":"ContainerDied","Data":"fbfb292b5ae721a4cb584d1241b22d69b4e595d095de8a005f02b6566c5ad5fc"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.947873 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" event={"ID":"cee716c2-1a9a-4944-9b9f-06284973b167","Type":"ContainerStarted","Data":"8f64b9ffd57257520379cd62b25ee0a03e81766081f5b8f465d81e6dbdb4a76d"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.947903 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" event={"ID":"cee716c2-1a9a-4944-9b9f-06284973b167","Type":"ContainerStarted","Data":"5f52a31966b561f89210fd93459bb67ae2bc89447fd06e99349ac859bb0f377b"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.948942 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"bb8ec498a0108f28ae5d250c812f903b2403913a8b303a7d5fc227b89edd8b9d"} Feb 20 00:10:35 crc kubenswrapper[5107]: I0220 00:10:35.948970 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"9783317f85645d76f5f9de45d4f4202f8879c57e1fb0331b6fe8e98a1fdd1cc0"} Feb 20 00:10:36 crc kubenswrapper[5107]: I0220 00:10:36.607374 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 00:10:36 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld 
Feb 20 00:10:36 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:36 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:36 crc kubenswrapper[5107]: I0220 00:10:36.607678 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:36 crc kubenswrapper[5107]: I0220 00:10:36.775298 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s"
Feb 20 00:10:36 crc kubenswrapper[5107]: I0220 00:10:36.778575 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x8c42"
Feb 20 00:10:37 crc kubenswrapper[5107]: I0220 00:10:37.610781 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:37 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:37 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:37 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:37 crc kubenswrapper[5107]: I0220 00:10:37.610856 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.487921 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627"
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.606449 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:38 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:38 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:38 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.606540 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.771396 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.849718 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access\") pod \"95617b10-56d5-420d-bf7c-5baf308736dc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") "
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.849776 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir\") pod \"95617b10-56d5-420d-bf7c-5baf308736dc\" (UID: \"95617b10-56d5-420d-bf7c-5baf308736dc\") "
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.849956 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "95617b10-56d5-420d-bf7c-5baf308736dc" (UID: "95617b10-56d5-420d-bf7c-5baf308736dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.850237 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95617b10-56d5-420d-bf7c-5baf308736dc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.856356 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "95617b10-56d5-420d-bf7c-5baf308736dc" (UID: "95617b10-56d5-420d-bf7c-5baf308736dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.951653 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95617b10-56d5-420d-bf7c-5baf308736dc-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.980531 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.980532 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"95617b10-56d5-420d-bf7c-5baf308736dc","Type":"ContainerDied","Data":"c108ca183591cbf1806261e1b1940394975e3d72ef6c13b2ce0a12e12d790985"}
Feb 20 00:10:38 crc kubenswrapper[5107]: I0220 00:10:38.980659 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c108ca183591cbf1806261e1b1940394975e3d72ef6c13b2ce0a12e12d790985"
Feb 20 00:10:39 crc kubenswrapper[5107]: I0220 00:10:39.606918 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:39 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:39 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:39 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:39 crc kubenswrapper[5107]: I0220 00:10:39.607012 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:40 crc kubenswrapper[5107]: I0220 00:10:40.607686 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:40 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:40 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:40 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:40 crc kubenswrapper[5107]: I0220 00:10:40.608373 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:40 crc kubenswrapper[5107]: I0220 00:10:40.923234 5107 patch_prober.go:28] interesting pod/console-64d44f6ddf-sx776 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 20 00:10:40 crc kubenswrapper[5107]: I0220 00:10:40.923323 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-sx776" podUID="bc3238c4-513a-495d-835d-da98864cdb8d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 20 00:10:41 crc kubenswrapper[5107]: I0220 00:10:41.764305 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:41 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:41 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:41 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:41 crc kubenswrapper[5107]: I0220 00:10:41.764405 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:42 crc kubenswrapper[5107]: I0220 00:10:42.607460 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:42 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:42 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:42 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:42 crc kubenswrapper[5107]: I0220 00:10:42.607588 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:43 crc kubenswrapper[5107]: I0220 00:10:43.026796 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glc89"
Feb 20 00:10:43 crc kubenswrapper[5107]: I0220 00:10:43.274029 5107 ???:1] "http: TLS handshake error from 192.168.126.11:41712: no serving certificate available for the kubelet"
Feb 20 00:10:43 crc kubenswrapper[5107]: I0220 00:10:43.608042 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:43 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:43 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:43 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:43 crc kubenswrapper[5107]: I0220 00:10:43.608189 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:43 crc kubenswrapper[5107]: I0220 00:10:43.908614 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-p7fkg"
Feb 20 00:10:44 crc kubenswrapper[5107]: I0220 00:10:44.614435 5107 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-4ltpk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 00:10:44 crc kubenswrapper[5107]: [-]has-synced failed: reason withheld
Feb 20 00:10:44 crc kubenswrapper[5107]: [+]process-running ok
Feb 20 00:10:44 crc kubenswrapper[5107]: healthz check failed
Feb 20 00:10:44 crc kubenswrapper[5107]: I0220 00:10:44.615039 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk" podUID="6368211b-5c56-4570-a4e6-b6cf86b392f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 00:10:44 crc kubenswrapper[5107]: E0220 00:10:44.856058 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:44 crc kubenswrapper[5107]: E0220 00:10:44.857448 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:44 crc kubenswrapper[5107]: E0220 00:10:44.859029 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:44 crc kubenswrapper[5107]: E0220 00:10:44.859082 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 20 00:10:45 crc kubenswrapper[5107]: I0220 00:10:45.608415 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:45 crc kubenswrapper[5107]: I0220 00:10:45.612017 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-4ltpk"
Feb 20 00:10:46 crc kubenswrapper[5107]: I0220 00:10:46.449003 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"]
Feb 20 00:10:46 crc kubenswrapper[5107]: I0220 00:10:46.449548 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" containerID="cri-o://b9e636dc729eb1510c99f1000be11203ed0f0cdaa6e898b0ab477523bde5c7d6" gracePeriod=30
Feb 20 00:10:46 crc kubenswrapper[5107]: I0220 00:10:46.465349 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"]
Feb 20 00:10:46 crc kubenswrapper[5107]: I0220 00:10:46.465629 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" containerID="cri-o://f2645fd0f3e75391e6f4c115d95bb8c0c6b966ef1f116b66c79912c293657c0e" gracePeriod=30
Feb 20 00:10:47 crc kubenswrapper[5107]: I0220 00:10:47.027167 5107 generic.go:358] "Generic (PLEG): container finished" podID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerID="f2645fd0f3e75391e6f4c115d95bb8c0c6b966ef1f116b66c79912c293657c0e" exitCode=0
Feb 20 00:10:47 crc kubenswrapper[5107]: I0220 00:10:47.027269 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" event={"ID":"38c88f45-4bc8-4153-962b-f3449bbb53ad","Type":"ContainerDied","Data":"f2645fd0f3e75391e6f4c115d95bb8c0c6b966ef1f116b66c79912c293657c0e"}
Feb 20 00:10:47 crc kubenswrapper[5107]: I0220 00:10:47.040946 5107 generic.go:358] "Generic (PLEG): container finished" podID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerID="b9e636dc729eb1510c99f1000be11203ed0f0cdaa6e898b0ab477523bde5c7d6" exitCode=0
Feb 20 00:10:47 crc kubenswrapper[5107]: I0220 00:10:47.041010 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" event={"ID":"c4f6c375-0a3f-4a66-908a-ac8180dba919","Type":"ContainerDied","Data":"b9e636dc729eb1510c99f1000be11203ed0f0cdaa6e898b0ab477523bde5c7d6"}
Feb 20 00:10:50 crc kubenswrapper[5107]: I0220 00:10:50.929539 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:50 crc kubenswrapper[5107]: I0220 00:10:50.938958 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-sx776"
Feb 20 00:10:51 crc kubenswrapper[5107]: I0220 00:10:51.215075 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-stc6t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 20 00:10:51 crc kubenswrapper[5107]: I0220 00:10:51.215192 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 20 00:10:51 crc kubenswrapper[5107]: I0220 00:10:51.215086 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-szvd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 20 00:10:51 crc kubenswrapper[5107]: I0220 00:10:51.215299 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 20 00:10:51 crc kubenswrapper[5107]: I0220 00:10:51.851905 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-7txlk"
Feb 20 00:10:54 crc kubenswrapper[5107]: E0220 00:10:54.857574 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:54 crc kubenswrapper[5107]: E0220 00:10:54.861330 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:54 crc kubenswrapper[5107]: E0220 00:10:54.867622 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:10:54 crc kubenswrapper[5107]: E0220 00:10:54.867927 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 20 00:10:55 crc kubenswrapper[5107]: I0220 00:10:55.094650 5107 generic.go:358] "Generic (PLEG): container finished" podID="1094e93d-2606-43c0-8b23-334bab811610" containerID="6544fac22d4c5363837b02856094b1f794bd587c6dbb0e8af4e12b4ff2fb4947" exitCode=0
Feb 20 00:10:55 crc kubenswrapper[5107]: I0220 00:10:55.094822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-tv4jx" event={"ID":"1094e93d-2606-43c0-8b23-334bab811610","Type":"ContainerDied","Data":"6544fac22d4c5363837b02856094b1f794bd587c6dbb0e8af4e12b4ff2fb4947"}
Feb 20 00:10:56 crc kubenswrapper[5107]: I0220 00:10:56.867433 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4sdjl"
Feb 20 00:11:01 crc kubenswrapper[5107]: I0220 00:11:01.214897 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-szvd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 20 00:11:01 crc kubenswrapper[5107]: I0220 00:11:01.215818 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-stc6t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 20 00:11:01 crc kubenswrapper[5107]: I0220 00:11:01.216361 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 20 00:11:01 crc kubenswrapper[5107]: I0220 00:11:01.216655 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 20 00:11:03 crc kubenswrapper[5107]: I0220 00:11:03.788996 5107 ???:1] "http: TLS handshake error from 192.168.126.11:54840: no serving certificate available for the kubelet"
Feb 20 00:11:04 crc kubenswrapper[5107]: E0220 00:11:04.856843 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:11:04 crc kubenswrapper[5107]: E0220 00:11:04.859128 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:11:04 crc kubenswrapper[5107]: E0220 00:11:04.861081 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 00:11:04 crc kubenswrapper[5107]: E0220 00:11:04.861205 5107 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.891964 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.893521 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d346e22-367f-4fff-8011-edab1a0809f5" containerName="pruner"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.893637 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d346e22-367f-4fff-8011-edab1a0809f5" containerName="pruner"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.893731 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95617b10-56d5-420d-bf7c-5baf308736dc" containerName="pruner"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.893812 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="95617b10-56d5-420d-bf7c-5baf308736dc" containerName="pruner"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.894062 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d346e22-367f-4fff-8011-edab1a0809f5" containerName="pruner"
Feb 20 00:11:06 crc kubenswrapper[5107]: I0220 00:11:06.894180 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="95617b10-56d5-420d-bf7c-5baf308736dc" containerName="pruner"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.634123 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.638574 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.638606 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.660725 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.660840 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.784784 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.784894 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.886866 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.886925 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.887030 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.924333 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:08 crc kubenswrapper[5107]: I0220 00:11:08.968832 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 20 00:11:10 crc kubenswrapper[5107]: I0220 00:11:10.900009 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 20 00:11:11 crc kubenswrapper[5107]: I0220 00:11:11.214733 5107 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-szvd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 20 00:11:11 crc kubenswrapper[5107]: I0220 00:11:11.214891 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 20 00:11:11 crc kubenswrapper[5107]: I0220 00:11:11.215033 5107 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-stc6t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 20 00:11:11 crc kubenswrapper[5107]: I0220 00:11:11.215311 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.786829 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.801356 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.861605 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.861769 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.861851 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.963228 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.963407 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.963432 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.963636 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.963807 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:12 crc kubenswrapper[5107]: I0220 00:11:12.998752 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access\") pod \"installer-12-crc\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.128387 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.486099 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-tv4jx"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.534341 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.565339 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"]
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.565879 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1094e93d-2606-43c0-8b23-334bab811610" containerName="image-pruner"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.565890 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1094e93d-2606-43c0-8b23-334bab811610" containerName="image-pruner"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.565908 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.565914 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.566001 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1094e93d-2606-43c0-8b23-334bab811610" containerName="image-pruner"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.566011 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" containerName="controller-manager"
Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.569173 5107 util.go:30] "No sandbox
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571582 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571649 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4t26\" (UniqueName: \"kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26\") pod \"1094e93d-2606-43c0-8b23-334bab811610\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571708 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571763 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571802 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87t9z\" (UniqueName: \"kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571844 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca\") pod \"1094e93d-2606-43c0-8b23-334bab811610\" (UID: \"1094e93d-2606-43c0-8b23-334bab811610\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571874 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.571922 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config\") pod \"c4f6c375-0a3f-4a66-908a-ac8180dba919\" (UID: \"c4f6c375-0a3f-4a66-908a-ac8180dba919\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.572938 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.572945 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca" (OuterVolumeSpecName: "serviceca") pod "1094e93d-2606-43c0-8b23-334bab811610" (UID: "1094e93d-2606-43c0-8b23-334bab811610"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.573254 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp" (OuterVolumeSpecName: "tmp") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.573548 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config" (OuterVolumeSpecName: "config") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.573567 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.584312 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z" (OuterVolumeSpecName: "kube-api-access-87t9z") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "kube-api-access-87t9z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.586756 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26" (OuterVolumeSpecName: "kube-api-access-d4t26") pod "1094e93d-2606-43c0-8b23-334bab811610" (UID: "1094e93d-2606-43c0-8b23-334bab811610"). InnerVolumeSpecName "kube-api-access-d4t26". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.594315 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4f6c375-0a3f-4a66-908a-ac8180dba919" (UID: "c4f6c375-0a3f-4a66-908a-ac8180dba919"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.596538 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"] Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.646309 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.674768 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca\") pod \"38c88f45-4bc8-4153-962b-f3449bbb53ad\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.674844 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert\") pod \"38c88f45-4bc8-4153-962b-f3449bbb53ad\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.674918 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config\") pod \"38c88f45-4bc8-4153-962b-f3449bbb53ad\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.674942 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp\") pod \"38c88f45-4bc8-4153-962b-f3449bbb53ad\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.675061 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl7zz\" (UniqueName: \"kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz\") pod \"38c88f45-4bc8-4153-962b-f3449bbb53ad\" (UID: \"38c88f45-4bc8-4153-962b-f3449bbb53ad\") " Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.675272 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676375 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676416 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676441 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwb5\" (UniqueName: \"kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " 
pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676636 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676709 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f6c375-0a3f-4a66-908a-ac8180dba919-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676721 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4t26\" (UniqueName: \"kubernetes.io/projected/1094e93d-2606-43c0-8b23-334bab811610-kube-api-access-d4t26\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676730 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c4f6c375-0a3f-4a66-908a-ac8180dba919-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676738 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676746 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-87t9z\" (UniqueName: \"kubernetes.io/projected/c4f6c375-0a3f-4a66-908a-ac8180dba919-kube-api-access-87t9z\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676774 5107 reconciler_common.go:299] "Volume detached for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/1094e93d-2606-43c0-8b23-334bab811610-serviceca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676782 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.676790 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f6c375-0a3f-4a66-908a-ac8180dba919-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.677555 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "38c88f45-4bc8-4153-962b-f3449bbb53ad" (UID: "38c88f45-4bc8-4153-962b-f3449bbb53ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.679554 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config" (OuterVolumeSpecName: "config") pod "38c88f45-4bc8-4153-962b-f3449bbb53ad" (UID: "38c88f45-4bc8-4153-962b-f3449bbb53ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.682617 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38c88f45-4bc8-4153-962b-f3449bbb53ad" (UID: "38c88f45-4bc8-4153-962b-f3449bbb53ad"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.682090 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp" (OuterVolumeSpecName: "tmp") pod "38c88f45-4bc8-4153-962b-f3449bbb53ad" (UID: "38c88f45-4bc8-4153-962b-f3449bbb53ad"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.697581 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz" (OuterVolumeSpecName: "kube-api-access-dl7zz") pod "38c88f45-4bc8-4153-962b-f3449bbb53ad" (UID: "38c88f45-4bc8-4153-962b-f3449bbb53ad"). InnerVolumeSpecName "kube-api-access-dl7zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.703750 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"] Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.705225 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.705245 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.705364 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" containerName="route-controller-manager" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.714992 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"] Feb 20 00:11:13 crc 
kubenswrapper[5107]: I0220 00:11:13.715120 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.777959 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.778027 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.778068 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6v7k\" (UniqueName: \"kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.778103 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc 
kubenswrapper[5107]: I0220 00:11:13.785787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.789123 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.789264 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.789858 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.789962 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwb5\" (UniqueName: \"kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " 
pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.790096 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.793072 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.793758 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.793879 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.794006 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.795531 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.795551 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/38c88f45-4bc8-4153-962b-f3449bbb53ad-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.795569 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dl7zz\" (UniqueName: \"kubernetes.io/projected/38c88f45-4bc8-4153-962b-f3449bbb53ad-kube-api-access-dl7zz\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.795581 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38c88f45-4bc8-4153-962b-f3449bbb53ad-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.795590 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c88f45-4bc8-4153-962b-f3449bbb53ad-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.797252 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc 
kubenswrapper[5107]: I0220 00:11:13.815098 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.815910 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwb5\" (UniqueName: \"kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5\") pod \"controller-manager-58dd86879d-ntsvq\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.887615 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.896888 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.896941 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6v7k\" (UniqueName: \"kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.896963 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.897011 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.897041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.899403 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.899734 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " 
pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.909029 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.916516 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:13 crc kubenswrapper[5107]: I0220 00:11:13.931753 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6v7k\" (UniqueName: \"kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k\") pod \"route-controller-manager-5894f84bfd-np8kz\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.015980 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 20 00:11:14 crc kubenswrapper[5107]: W0220 00:11:14.040363 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda285d85f_6695_407b_aed4_2050b9a32b34.slice/crio-a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3 WatchSource:0}: Error finding container a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3: Status 404 returned error can't find the container with id 
a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3 Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.065936 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.129799 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.243005 5107 generic.go:358] "Generic (PLEG): container finished" podID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerID="1a28e860ab72374160cfcd2ec6e035484d98de78329d7ee68b152feab78b9518" exitCode=0 Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.243152 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerDied","Data":"1a28e860ab72374160cfcd2ec6e035484d98de78329d7ee68b152feab78b9518"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.253368 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.284169 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.284847 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.292387 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" 
event={"ID":"38c88f45-4bc8-4153-962b-f3449bbb53ad","Type":"ContainerDied","Data":"19d23b8edb7506b9de999af2320529e102f0861374b736ca2b045f7732478975"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.292500 5107 scope.go:117] "RemoveContainer" containerID="f2645fd0f3e75391e6f4c115d95bb8c0c6b966ef1f116b66c79912c293657c0e" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.292716 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.376591 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.376575313 podStartE2EDuration="1m12.376575313s" podCreationTimestamp="2026-02-20 00:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:14.349791374 +0000 UTC m=+160.718449060" watchObservedRunningTime="2026-02-20 00:11:14.376575313 +0000 UTC m=+160.745232889" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.378598 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.380387 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-stc6t"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.381422 5107 generic.go:358] "Generic (PLEG): container finished" podID="67484e0a-1c92-441c-9b19-892f98f62176" containerID="7d275b3ab9f0f5b68bb4309db2aa298171377798335761ad02c92eaba5a68436" exitCode=0 Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.381535 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" 
event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerDied","Data":"7d275b3ab9f0f5b68bb4309db2aa298171377798335761ad02c92eaba5a68436"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.395536 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerStarted","Data":"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.406624 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.406916 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-szvd6" event={"ID":"c4f6c375-0a3f-4a66-908a-ac8180dba919","Type":"ContainerDied","Data":"2494ccf44b78b81a06691452e59d0d54c5501b6b3279db79cff065ecb6b628a9"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.406984 5107 scope.go:117] "RemoveContainer" containerID="b9e636dc729eb1510c99f1000be11203ed0f0cdaa6e898b0ab477523bde5c7d6" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.414334 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"9c5bf76b-3e95-425c-911a-5078511bd0f3","Type":"ContainerStarted","Data":"4e7da0116e8afcaf68a00d5fc19c78d93c7135399351ca36822eda22fdc4b3ea"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.418896 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.420754 5107 generic.go:358] "Generic (PLEG): container finished" podID="582a976b-611f-4153-9c08-eb9f343b290f" containerID="9e4f009a603aa27099856ec95dbc0e4a5c64e68c9f2a5c2059cb7f3563cdf21d" exitCode=0 Feb 20 00:11:14 crc 
kubenswrapper[5107]: I0220 00:11:14.420835 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerDied","Data":"9e4f009a603aa27099856ec95dbc0e4a5c64e68c9f2a5c2059cb7f3563cdf21d"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.436248 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29525760-tv4jx" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.436847 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29525760-tv4jx" event={"ID":"1094e93d-2606-43c0-8b23-334bab811610","Type":"ContainerDied","Data":"7270978b4cb1c772b8f829a5c4757213085ec222c7b2fd40f2560389d4de7df1"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.436890 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7270978b4cb1c772b8f829a5c4757213085ec222c7b2fd40f2560389d4de7df1" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.463775 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.472107 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-szvd6"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.495561 5107 generic.go:358] "Generic (PLEG): container finished" podID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerID="95857bd6374c565c234da2e7ce3eb9c45491ca331ca04a200cfac36f782b14df" exitCode=0 Feb 20 00:11:14 crc kubenswrapper[5107]: W0220 00:11:14.497485 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fef25fa_89ed_4fb1_acff_af76da291a0c.slice/crio-ddc90a6a8ee235501e143bf75b2a63329438f08473a88bd869a2ed087aef3e2a 
WatchSource:0}: Error finding container ddc90a6a8ee235501e143bf75b2a63329438f08473a88bd869a2ed087aef3e2a: Status 404 returned error can't find the container with id ddc90a6a8ee235501e143bf75b2a63329438f08473a88bd869a2ed087aef3e2a Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.512164 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c88f45-4bc8-4153-962b-f3449bbb53ad" path="/var/lib/kubelet/pods/38c88f45-4bc8-4153-962b-f3449bbb53ad/volumes" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.517035 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f6c375-0a3f-4a66-908a-ac8180dba919" path="/var/lib/kubelet/pods/c4f6c375-0a3f-4a66-908a-ac8180dba919/volumes" Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.517898 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerStarted","Data":"433ff21ffd04ad886fdf049a414f5a28b86c12d4187ad152631bb77dd0226554"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.517924 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerDied","Data":"95857bd6374c565c234da2e7ce3eb9c45491ca331ca04a200cfac36f782b14df"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.537266 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j2l2p" event={"ID":"cee716c2-1a9a-4944-9b9f-06284973b167","Type":"ContainerStarted","Data":"9f566fbcf7be5629a77417a6b69b2aa7d7b7a74dd85dfe02c510b9dad82caac2"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.556158 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"a285d85f-6695-407b-aed4-2050b9a32b34","Type":"ContainerStarted","Data":"a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3"} Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.618283 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"] Feb 20 00:11:14 crc kubenswrapper[5107]: I0220 00:11:14.670341 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j2l2p" podStartSLOduration=138.670318704 podStartE2EDuration="2m18.670318704s" podCreationTimestamp="2026-02-20 00:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:14.644633707 +0000 UTC m=+161.013291273" watchObservedRunningTime="2026-02-20 00:11:14.670318704 +0000 UTC m=+161.038976270" Feb 20 00:11:14 crc kubenswrapper[5107]: E0220 00:11:14.854633 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf is running failed: container process not found" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:11:14 crc kubenswrapper[5107]: E0220 00:11:14.855355 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf is running failed: container process not found" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:11:14 crc kubenswrapper[5107]: E0220 00:11:14.856396 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf is running failed: container process not found" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 00:11:14 crc kubenswrapper[5107]: E0220 00:11:14.856437 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.563457 5107 generic.go:358] "Generic (PLEG): container finished" podID="9c5bf76b-3e95-425c-911a-5078511bd0f3" containerID="eea4652eed7ee9c9931058c8e033a04d262db1015313d491442850509cc86b6c" exitCode=0 Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.563517 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"9c5bf76b-3e95-425c-911a-5078511bd0f3","Type":"ContainerDied","Data":"eea4652eed7ee9c9931058c8e033a04d262db1015313d491442850509cc86b6c"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.565549 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerStarted","Data":"9a025670c3da79f3dc4e5ace412780ba57fb4632b43794f709f770b864f66566"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.568379 5107 generic.go:358] "Generic (PLEG): container finished" podID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerID="60fc24dc144419deefadd9787069c5e9b791f93567a2b927eb1d1c555f8c56b8" exitCode=0 Feb 20 00:11:15 crc kubenswrapper[5107]: 
I0220 00:11:15.568447 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerDied","Data":"60fc24dc144419deefadd9787069c5e9b791f93567a2b927eb1d1c555f8c56b8"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.570095 5107 generic.go:358] "Generic (PLEG): container finished" podID="69b44045-c596-43d0-bf80-5e5c89671bef" containerID="433ff21ffd04ad886fdf049a414f5a28b86c12d4187ad152631bb77dd0226554" exitCode=0 Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.570161 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerDied","Data":"433ff21ffd04ad886fdf049a414f5a28b86c12d4187ad152631bb77dd0226554"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.570176 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerStarted","Data":"89226f993387ddc2de766bc4a863990533c3d9bfe594e6f090aa7df201d71ab0"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.575722 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerStarted","Data":"7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.578015 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a285d85f-6695-407b-aed4-2050b9a32b34","Type":"ContainerStarted","Data":"061642bbba5010e3268842502df49978596b582532a514d0c2ee5ce7d2b0026e"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.580778 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" 
event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerStarted","Data":"692427aaf99f853b5e81e55100a6ca9bc5a71be4ef10ab4b3772882f52dff4e9"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.582700 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" event={"ID":"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138","Type":"ContainerStarted","Data":"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.582748 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" event={"ID":"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138","Type":"ContainerStarted","Data":"5ff274e90d4e227f62454a3b42241614ae95e65a3e30ca5c47dece1095f9a49a"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.584269 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.592964 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.593183 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerStarted","Data":"9fc32ffe82b6fae16de2f089d6987aadaf4da72ac2f700f80fecde4374689cf2"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.626577 5107 generic.go:358] "Generic (PLEG): container finished" podID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerID="a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d" exitCode=0 Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.626720 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerDied","Data":"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.626750 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerStarted","Data":"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.641441 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" event={"ID":"5fef25fa-89ed-4fb1-acff-af76da291a0c","Type":"ContainerStarted","Data":"149fad8151c9afb5d0de36bdfaec7d102dd604df90497afe05e6d70d09aaca33"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.641503 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" event={"ID":"5fef25fa-89ed-4fb1-acff-af76da291a0c","Type":"ContainerStarted","Data":"ddc90a6a8ee235501e143bf75b2a63329438f08473a88bd869a2ed087aef3e2a"} Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.643540 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.648312 5107 generic.go:358] "Generic (PLEG): container finished" podID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerID="5ff035e9fb87dfbc120765af007d29f5902cf09db582fde758355626d8d3fcb4" exitCode=0 Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.648388 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerDied","Data":"5ff035e9fb87dfbc120765af007d29f5902cf09db582fde758355626d8d3fcb4"} Feb 
20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.661251 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rc7nq" podStartSLOduration=3.948582776 podStartE2EDuration="45.661234101s" podCreationTimestamp="2026-02-20 00:10:30 +0000 UTC" firstStartedPulling="2026-02-20 00:10:31.847608729 +0000 UTC m=+118.216266295" lastFinishedPulling="2026-02-20 00:11:13.560260054 +0000 UTC m=+159.928917620" observedRunningTime="2026-02-20 00:11:15.658651605 +0000 UTC m=+162.027309171" watchObservedRunningTime="2026-02-20 00:11:15.661234101 +0000 UTC m=+162.029891667" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.694989 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" podStartSLOduration=9.694973186 podStartE2EDuration="9.694973186s" podCreationTimestamp="2026-02-20 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:15.691262707 +0000 UTC m=+162.059920273" watchObservedRunningTime="2026-02-20 00:11:15.694973186 +0000 UTC m=+162.063630752" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.710190 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rf4ps" podStartSLOduration=3.911950203 podStartE2EDuration="47.710170604s" podCreationTimestamp="2026-02-20 00:10:28 +0000 UTC" firstStartedPulling="2026-02-20 00:10:29.812518242 +0000 UTC m=+116.181175818" lastFinishedPulling="2026-02-20 00:11:13.610738643 +0000 UTC m=+159.979396219" observedRunningTime="2026-02-20 00:11:15.708361471 +0000 UTC m=+162.077019037" watchObservedRunningTime="2026-02-20 00:11:15.710170604 +0000 UTC m=+162.078828170" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.788660 5107 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-z54dq" podStartSLOduration=4.130720207 podStartE2EDuration="47.788643258s" podCreationTimestamp="2026-02-20 00:10:28 +0000 UTC" firstStartedPulling="2026-02-20 00:10:29.79335743 +0000 UTC m=+116.162014996" lastFinishedPulling="2026-02-20 00:11:13.451280481 +0000 UTC m=+159.819938047" observedRunningTime="2026-02-20 00:11:15.784395593 +0000 UTC m=+162.153053159" watchObservedRunningTime="2026-02-20 00:11:15.788643258 +0000 UTC m=+162.157300824" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.789331 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ct7wl" podStartSLOduration=4.18706867 podStartE2EDuration="45.789325858s" podCreationTimestamp="2026-02-20 00:10:30 +0000 UTC" firstStartedPulling="2026-02-20 00:10:31.846362815 +0000 UTC m=+118.215020381" lastFinishedPulling="2026-02-20 00:11:13.448619983 +0000 UTC m=+159.817277569" observedRunningTime="2026-02-20 00:11:15.751567695 +0000 UTC m=+162.120225271" watchObservedRunningTime="2026-02-20 00:11:15.789325858 +0000 UTC m=+162.157983424" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.839392 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxqz9" podStartSLOduration=4.208459282 podStartE2EDuration="46.839372124s" podCreationTimestamp="2026-02-20 00:10:29 +0000 UTC" firstStartedPulling="2026-02-20 00:10:30.820857184 +0000 UTC m=+117.189514750" lastFinishedPulling="2026-02-20 00:11:13.451770016 +0000 UTC m=+159.820427592" observedRunningTime="2026-02-20 00:11:15.81448582 +0000 UTC m=+162.183143386" watchObservedRunningTime="2026-02-20 00:11:15.839372124 +0000 UTC m=+162.208029690" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.856284 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=5.856264062 
podStartE2EDuration="5.856264062s" podCreationTimestamp="2026-02-20 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:15.854419347 +0000 UTC m=+162.223076913" watchObservedRunningTime="2026-02-20 00:11:15.856264062 +0000 UTC m=+162.224921628" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.867905 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.884545 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2bt6" podStartSLOduration=4.233086305 podStartE2EDuration="47.884526975s" podCreationTimestamp="2026-02-20 00:10:28 +0000 UTC" firstStartedPulling="2026-02-20 00:10:29.799337396 +0000 UTC m=+116.167994962" lastFinishedPulling="2026-02-20 00:11:13.450778066 +0000 UTC m=+159.819435632" observedRunningTime="2026-02-20 00:11:15.882751683 +0000 UTC m=+162.251409249" watchObservedRunningTime="2026-02-20 00:11:15.884526975 +0000 UTC m=+162.253184541" Feb 20 00:11:15 crc kubenswrapper[5107]: I0220 00:11:15.928571 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" podStartSLOduration=9.928552023 podStartE2EDuration="9.928552023s" podCreationTimestamp="2026-02-20 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:15.925938796 +0000 UTC m=+162.294596362" watchObservedRunningTime="2026-02-20 00:11:15.928552023 +0000 UTC m=+162.297209589" Feb 20 00:11:16 crc kubenswrapper[5107]: I0220 00:11:16.662381 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" 
event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerStarted","Data":"85327b3fa52a2a7a4dfca8d471e54ecf75f9fdaee09ca81161b0f52b041c2c05"} Feb 20 00:11:16 crc kubenswrapper[5107]: I0220 00:11:16.666087 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerStarted","Data":"c62aacf712da536ac7dad93be8625a981f622a7e2acce6e8f57b50ef083e8c5d"} Feb 20 00:11:16 crc kubenswrapper[5107]: I0220 00:11:16.701546 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4kmt8" podStartSLOduration=5.958142448 podStartE2EDuration="45.701530653s" podCreationTimestamp="2026-02-20 00:10:31 +0000 UTC" firstStartedPulling="2026-02-20 00:10:33.886293656 +0000 UTC m=+120.254951212" lastFinishedPulling="2026-02-20 00:11:13.629681851 +0000 UTC m=+159.998339417" observedRunningTime="2026-02-20 00:11:16.685314315 +0000 UTC m=+163.053971881" watchObservedRunningTime="2026-02-20 00:11:16.701530653 +0000 UTC m=+163.070188219" Feb 20 00:11:16 crc kubenswrapper[5107]: I0220 00:11:16.703198 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvrxj" podStartSLOduration=4.986251748 podStartE2EDuration="44.703189312s" podCreationTimestamp="2026-02-20 00:10:32 +0000 UTC" firstStartedPulling="2026-02-20 00:10:33.891996965 +0000 UTC m=+120.260654521" lastFinishedPulling="2026-02-20 00:11:13.608934519 +0000 UTC m=+159.977592085" observedRunningTime="2026-02-20 00:11:16.70107135 +0000 UTC m=+163.069728916" watchObservedRunningTime="2026-02-20 00:11:16.703189312 +0000 UTC m=+163.071846878" Feb 20 00:11:16 crc kubenswrapper[5107]: I0220 00:11:16.950007 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.066932 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir\") pod \"9c5bf76b-3e95-425c-911a-5078511bd0f3\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.066998 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access\") pod \"9c5bf76b-3e95-425c-911a-5078511bd0f3\" (UID: \"9c5bf76b-3e95-425c-911a-5078511bd0f3\") " Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.067105 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9c5bf76b-3e95-425c-911a-5078511bd0f3" (UID: "9c5bf76b-3e95-425c-911a-5078511bd0f3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.067220 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c5bf76b-3e95-425c-911a-5078511bd0f3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.079053 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9c5bf76b-3e95-425c-911a-5078511bd0f3" (UID: "9c5bf76b-3e95-425c-911a-5078511bd0f3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.168565 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c5bf76b-3e95-425c-911a-5078511bd0f3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.671317 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"9c5bf76b-3e95-425c-911a-5078511bd0f3","Type":"ContainerDied","Data":"4e7da0116e8afcaf68a00d5fc19c78d93c7135399351ca36822eda22fdc4b3ea"} Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.671349 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 20 00:11:17 crc kubenswrapper[5107]: I0220 00:11:17.671365 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e7da0116e8afcaf68a00d5fc19c78d93c7135399351ca36822eda22fdc4b3ea" Feb 20 00:11:18 crc kubenswrapper[5107]: I0220 00:11:18.896272 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:11:18 crc kubenswrapper[5107]: I0220 00:11:18.896324 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.253293 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.317108 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.317248 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.323099 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.323185 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.367776 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.368954 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.518537 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.518594 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.559894 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.733301 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.737223 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:19 crc kubenswrapper[5107]: I0220 00:11:19.751315 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:11:19 crc 
kubenswrapper[5107]: I0220 00:11:19.762772 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.477643 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.691430 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wrrss_01d70318-38f6-4dc0-acc4-36458ccf419c/kube-multus-additional-cni-plugins/0.log" Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.691733 5107 generic.go:358] "Generic (PLEG): container finished" podID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" exitCode=137 Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.691836 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" event={"ID":"01d70318-38f6-4dc0-acc4-36458ccf419c","Type":"ContainerDied","Data":"b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf"} Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.855575 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.855699 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:11:20 crc kubenswrapper[5107]: I0220 00:11:20.909050 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.113110 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wrrss_01d70318-38f6-4dc0-acc4-36458ccf419c/kube-multus-additional-cni-plugins/0.log" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.113288 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.222453 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist\") pod \"01d70318-38f6-4dc0-acc4-36458ccf419c\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.222557 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready\") pod \"01d70318-38f6-4dc0-acc4-36458ccf419c\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.222598 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir\") pod \"01d70318-38f6-4dc0-acc4-36458ccf419c\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.222680 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mjg8\" (UniqueName: \"kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8\") pod \"01d70318-38f6-4dc0-acc4-36458ccf419c\" (UID: \"01d70318-38f6-4dc0-acc4-36458ccf419c\") " Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.222802 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir" 
(OuterVolumeSpecName: "tuning-conf-dir") pod "01d70318-38f6-4dc0-acc4-36458ccf419c" (UID: "01d70318-38f6-4dc0-acc4-36458ccf419c"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.223222 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready" (OuterVolumeSpecName: "ready") pod "01d70318-38f6-4dc0-acc4-36458ccf419c" (UID: "01d70318-38f6-4dc0-acc4-36458ccf419c"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.223260 5107 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01d70318-38f6-4dc0-acc4-36458ccf419c-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.223332 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "01d70318-38f6-4dc0-acc4-36458ccf419c" (UID: "01d70318-38f6-4dc0-acc4-36458ccf419c"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.232081 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8" (OuterVolumeSpecName: "kube-api-access-6mjg8") pod "01d70318-38f6-4dc0-acc4-36458ccf419c" (UID: "01d70318-38f6-4dc0-acc4-36458ccf419c"). InnerVolumeSpecName "kube-api-access-6mjg8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.280693 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.280776 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.324635 5107 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01d70318-38f6-4dc0-acc4-36458ccf419c-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.324679 5107 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/01d70318-38f6-4dc0-acc4-36458ccf419c-ready\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.324694 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mjg8\" (UniqueName: \"kubernetes.io/projected/01d70318-38f6-4dc0-acc4-36458ccf419c-kube-api-access-6mjg8\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.351220 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.700213 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wrrss_01d70318-38f6-4dc0-acc4-36458ccf419c/kube-multus-additional-cni-plugins/0.log" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.700391 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.700462 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wrrss" event={"ID":"01d70318-38f6-4dc0-acc4-36458ccf419c","Type":"ContainerDied","Data":"73281017a853c55f5598955d52483b356bdde4e56f3b1b202da5bc72b447b014"} Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.700559 5107 scope.go:117] "RemoveContainer" containerID="b9e64a7933b7566bee3689cad101d004830ea2f6dc864c2ef830802020aef0cf" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.700806 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxqz9" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="registry-server" containerID="cri-o://692427aaf99f853b5e81e55100a6ca9bc5a71be4ef10ab4b3772882f52dff4e9" gracePeriod=2 Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.738450 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wrrss"] Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.741633 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wrrss"] Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.755703 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:11:21 crc kubenswrapper[5107]: I0220 00:11:21.759871 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.276881 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.276956 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.346738 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.495030 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" path="/var/lib/kubelet/pods/01d70318-38f6-4dc0-acc4-36458ccf419c/volumes" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.771172 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.883041 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:11:22 crc kubenswrapper[5107]: I0220 00:11:22.883635 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z54dq" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="registry-server" containerID="cri-o://89226f993387ddc2de766bc4a863990533c3d9bfe594e6f090aa7df201d71ab0" gracePeriod=2 Feb 20 00:11:23 crc kubenswrapper[5107]: I0220 00:11:23.039926 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:11:23 crc kubenswrapper[5107]: I0220 00:11:23.040019 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:11:23 crc kubenswrapper[5107]: I0220 00:11:23.104471 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:11:23 crc kubenswrapper[5107]: I0220 00:11:23.798642 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvrxj" 
Feb 20 00:11:24 crc kubenswrapper[5107]: I0220 00:11:24.749739 5107 generic.go:358] "Generic (PLEG): container finished" podID="69b44045-c596-43d0-bf80-5e5c89671bef" containerID="89226f993387ddc2de766bc4a863990533c3d9bfe594e6f090aa7df201d71ab0" exitCode=0 Feb 20 00:11:24 crc kubenswrapper[5107]: I0220 00:11:24.749850 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerDied","Data":"89226f993387ddc2de766bc4a863990533c3d9bfe594e6f090aa7df201d71ab0"} Feb 20 00:11:24 crc kubenswrapper[5107]: I0220 00:11:24.756894 5107 generic.go:358] "Generic (PLEG): container finished" podID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerID="692427aaf99f853b5e81e55100a6ca9bc5a71be4ef10ab4b3772882f52dff4e9" exitCode=0 Feb 20 00:11:24 crc kubenswrapper[5107]: I0220 00:11:24.756988 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerDied","Data":"692427aaf99f853b5e81e55100a6ca9bc5a71be4ef10ab4b3772882f52dff4e9"} Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.003360 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.083657 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content\") pod \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.083706 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities\") pod \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.083781 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcgwg\" (UniqueName: \"kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg\") pod \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\" (UID: \"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea\") " Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.085553 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities" (OuterVolumeSpecName: "utilities") pod "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" (UID: "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.088717 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg" (OuterVolumeSpecName: "kube-api-access-jcgwg") pod "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" (UID: "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea"). InnerVolumeSpecName "kube-api-access-jcgwg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.122023 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" (UID: "afb3e406-6312-4e7e-bcaa-f3c532a0c1ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.185807 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.185867 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.185880 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jcgwg\" (UniqueName: \"kubernetes.io/projected/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea-kube-api-access-jcgwg\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.280534 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.281011 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ct7wl" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="registry-server" containerID="cri-o://9fc32ffe82b6fae16de2f089d6987aadaf4da72ac2f700f80fecde4374689cf2" gracePeriod=2 Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.670175 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.775181 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxqz9" event={"ID":"afb3e406-6312-4e7e-bcaa-f3c532a0c1ea","Type":"ContainerDied","Data":"14188e041540619f3da1e0c3e00ce913f9158bd6c697f6a95c6680c6fcc4d156"} Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.775246 5107 scope.go:117] "RemoveContainer" containerID="692427aaf99f853b5e81e55100a6ca9bc5a71be4ef10ab4b3772882f52dff4e9" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.775431 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxqz9" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.798218 5107 scope.go:117] "RemoveContainer" containerID="1a28e860ab72374160cfcd2ec6e035484d98de78329d7ee68b152feab78b9518" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.806616 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.818243 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxqz9"] Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.835985 5107 scope.go:117] "RemoveContainer" containerID="6a7d69079efec247a532784840f1b61d7989409655477f4905cb11a75f790e20" Feb 20 00:11:25 crc kubenswrapper[5107]: I0220 00:11:25.923296 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.097631 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content\") pod \"69b44045-c596-43d0-bf80-5e5c89671bef\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.098217 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7497k\" (UniqueName: \"kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k\") pod \"69b44045-c596-43d0-bf80-5e5c89671bef\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.098259 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities\") pod \"69b44045-c596-43d0-bf80-5e5c89671bef\" (UID: \"69b44045-c596-43d0-bf80-5e5c89671bef\") " Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.100632 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities" (OuterVolumeSpecName: "utilities") pod "69b44045-c596-43d0-bf80-5e5c89671bef" (UID: "69b44045-c596-43d0-bf80-5e5c89671bef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.110686 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k" (OuterVolumeSpecName: "kube-api-access-7497k") pod "69b44045-c596-43d0-bf80-5e5c89671bef" (UID: "69b44045-c596-43d0-bf80-5e5c89671bef"). InnerVolumeSpecName "kube-api-access-7497k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.160802 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b44045-c596-43d0-bf80-5e5c89671bef" (UID: "69b44045-c596-43d0-bf80-5e5c89671bef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.200299 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.200367 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7497k\" (UniqueName: \"kubernetes.io/projected/69b44045-c596-43d0-bf80-5e5c89671bef-kube-api-access-7497k\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.200442 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b44045-c596-43d0-bf80-5e5c89671bef-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.494415 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" path="/var/lib/kubelet/pods/afb3e406-6312-4e7e-bcaa-f3c532a0c1ea/volumes" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.789592 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z54dq" event={"ID":"69b44045-c596-43d0-bf80-5e5c89671bef","Type":"ContainerDied","Data":"fb18d954cd9bc91cd579a3c8b788db3b2fb154e096a210d26620bbb124f81998"} Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.789668 5107 scope.go:117] "RemoveContainer" 
containerID="89226f993387ddc2de766bc4a863990533c3d9bfe594e6f090aa7df201d71ab0" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.789917 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z54dq" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.798743 5107 generic.go:358] "Generic (PLEG): container finished" podID="67484e0a-1c92-441c-9b19-892f98f62176" containerID="9fc32ffe82b6fae16de2f089d6987aadaf4da72ac2f700f80fecde4374689cf2" exitCode=0 Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.798845 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerDied","Data":"9fc32ffe82b6fae16de2f089d6987aadaf4da72ac2f700f80fecde4374689cf2"} Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.863374 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.866459 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z54dq"] Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.869511 5107 scope.go:117] "RemoveContainer" containerID="433ff21ffd04ad886fdf049a414f5a28b86c12d4187ad152631bb77dd0226554" Feb 20 00:11:26 crc kubenswrapper[5107]: I0220 00:11:26.900089 5107 scope.go:117] "RemoveContainer" containerID="6a77266d76a61dac2085923eee998b63f39e68d10b34de0151c84d8fa7b62679" Feb 20 00:11:27 crc kubenswrapper[5107]: I0220 00:11:27.081200 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:11:27 crc kubenswrapper[5107]: I0220 00:11:27.081645 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvrxj" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="registry-server" 
containerID="cri-o://c62aacf712da536ac7dad93be8625a981f622a7e2acce6e8f57b50ef083e8c5d" gracePeriod=2 Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.161206 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.328630 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content\") pod \"67484e0a-1c92-441c-9b19-892f98f62176\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.328762 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89sl\" (UniqueName: \"kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl\") pod \"67484e0a-1c92-441c-9b19-892f98f62176\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.328807 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities\") pod \"67484e0a-1c92-441c-9b19-892f98f62176\" (UID: \"67484e0a-1c92-441c-9b19-892f98f62176\") " Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.330039 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities" (OuterVolumeSpecName: "utilities") pod "67484e0a-1c92-441c-9b19-892f98f62176" (UID: "67484e0a-1c92-441c-9b19-892f98f62176"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.335247 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl" (OuterVolumeSpecName: "kube-api-access-h89sl") pod "67484e0a-1c92-441c-9b19-892f98f62176" (UID: "67484e0a-1c92-441c-9b19-892f98f62176"). InnerVolumeSpecName "kube-api-access-h89sl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.350493 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67484e0a-1c92-441c-9b19-892f98f62176" (UID: "67484e0a-1c92-441c-9b19-892f98f62176"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.430606 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h89sl\" (UniqueName: \"kubernetes.io/projected/67484e0a-1c92-441c-9b19-892f98f62176-kube-api-access-h89sl\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.430647 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.430661 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67484e0a-1c92-441c-9b19-892f98f62176-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.497760 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" 
path="/var/lib/kubelet/pods/69b44045-c596-43d0-bf80-5e5c89671bef/volumes" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.843042 5107 generic.go:358] "Generic (PLEG): container finished" podID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerID="c62aacf712da536ac7dad93be8625a981f622a7e2acce6e8f57b50ef083e8c5d" exitCode=0 Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.843087 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerDied","Data":"c62aacf712da536ac7dad93be8625a981f622a7e2acce6e8f57b50ef083e8c5d"} Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.847178 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct7wl" event={"ID":"67484e0a-1c92-441c-9b19-892f98f62176","Type":"ContainerDied","Data":"368f888a94c749ec4b9f29eff7394e6631590bf738e81cf2de247a9b8ce65c82"} Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.847214 5107 scope.go:117] "RemoveContainer" containerID="9fc32ffe82b6fae16de2f089d6987aadaf4da72ac2f700f80fecde4374689cf2" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.847266 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct7wl" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.873551 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.878742 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct7wl"] Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.883566 5107 scope.go:117] "RemoveContainer" containerID="7d275b3ab9f0f5b68bb4309db2aa298171377798335761ad02c92eaba5a68436" Feb 20 00:11:28 crc kubenswrapper[5107]: I0220 00:11:28.933351 5107 scope.go:117] "RemoveContainer" containerID="0db42d086e2586904cb569c178ed1c9b447de19a76ec055919be07d76366ec9f" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.103971 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.242340 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5qqm\" (UniqueName: \"kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm\") pod \"58158355-aadb-4a44-8f4e-3e0c20d702e6\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.242477 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities\") pod \"58158355-aadb-4a44-8f4e-3e0c20d702e6\" (UID: \"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.242528 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content\") pod \"58158355-aadb-4a44-8f4e-3e0c20d702e6\" (UID: 
\"58158355-aadb-4a44-8f4e-3e0c20d702e6\") " Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.248027 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities" (OuterVolumeSpecName: "utilities") pod "58158355-aadb-4a44-8f4e-3e0c20d702e6" (UID: "58158355-aadb-4a44-8f4e-3e0c20d702e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.254416 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm" (OuterVolumeSpecName: "kube-api-access-j5qqm") pod "58158355-aadb-4a44-8f4e-3e0c20d702e6" (UID: "58158355-aadb-4a44-8f4e-3e0c20d702e6"). InnerVolumeSpecName "kube-api-access-j5qqm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.345021 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.345061 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5qqm\" (UniqueName: \"kubernetes.io/projected/58158355-aadb-4a44-8f4e-3e0c20d702e6-kube-api-access-j5qqm\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.414974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58158355-aadb-4a44-8f4e-3e0c20d702e6" (UID: "58158355-aadb-4a44-8f4e-3e0c20d702e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.446428 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58158355-aadb-4a44-8f4e-3e0c20d702e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.859769 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvrxj" event={"ID":"58158355-aadb-4a44-8f4e-3e0c20d702e6","Type":"ContainerDied","Data":"d1f10e6592006da36cf376970971b1e6732cd0dd5ce4066152bc17f4e8e476ca"} Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.860058 5107 scope.go:117] "RemoveContainer" containerID="c62aacf712da536ac7dad93be8625a981f622a7e2acce6e8f57b50ef083e8c5d" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.859862 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvrxj" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.880910 5107 scope.go:117] "RemoveContainer" containerID="60fc24dc144419deefadd9787069c5e9b791f93567a2b927eb1d1c555f8c56b8" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.908386 5107 scope.go:117] "RemoveContainer" containerID="186519ba74071db50302d9f0db88edcec6ccfb105f19c6f42aa8f6f7edf5a3af" Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.931253 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:11:29 crc kubenswrapper[5107]: I0220 00:11:29.939220 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvrxj"] Feb 20 00:11:30 crc kubenswrapper[5107]: I0220 00:11:30.497545 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" path="/var/lib/kubelet/pods/58158355-aadb-4a44-8f4e-3e0c20d702e6/volumes" Feb 20 00:11:30 crc 
kubenswrapper[5107]: I0220 00:11:30.498542 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67484e0a-1c92-441c-9b19-892f98f62176" path="/var/lib/kubelet/pods/67484e0a-1c92-441c-9b19-892f98f62176/volumes" Feb 20 00:11:35 crc kubenswrapper[5107]: I0220 00:11:35.665093 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lp56s"] Feb 20 00:11:44 crc kubenswrapper[5107]: I0220 00:11:44.770900 5107 ???:1] "http: TLS handshake error from 192.168.126.11:33750: no serving certificate available for the kubelet" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.408829 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"] Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.409573 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" podUID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" containerName="route-controller-manager" containerID="cri-o://2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0" gracePeriod=30 Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.412247 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"] Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.412541 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" podUID="5fef25fa-89ed-4fb1-acff-af76da291a0c" containerName="controller-manager" containerID="cri-o://149fad8151c9afb5d0de36bdfaec7d102dd604df90497afe05e6d70d09aaca33" gracePeriod=30 Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.945124 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.946678 5107 generic.go:358] "Generic (PLEG): container finished" podID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" containerID="2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0" exitCode=0 Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.946756 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.946771 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" event={"ID":"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138","Type":"ContainerDied","Data":"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0"} Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.946823 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz" event={"ID":"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138","Type":"ContainerDied","Data":"5ff274e90d4e227f62454a3b42241614ae95e65a3e30ca5c47dece1095f9a49a"} Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.946847 5107 scope.go:117] "RemoveContainer" containerID="2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.948393 5107 generic.go:358] "Generic (PLEG): container finished" podID="5fef25fa-89ed-4fb1-acff-af76da291a0c" containerID="149fad8151c9afb5d0de36bdfaec7d102dd604df90497afe05e6d70d09aaca33" exitCode=0 Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.948496 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" 
event={"ID":"5fef25fa-89ed-4fb1-acff-af76da291a0c","Type":"ContainerDied","Data":"149fad8151c9afb5d0de36bdfaec7d102dd604df90497afe05e6d70d09aaca33"} Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.965804 5107 scope.go:117] "RemoveContainer" containerID="2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.973483 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"] Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.973972 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.973988 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.973998 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974004 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974012 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" containerName="route-controller-manager" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974018 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" containerName="route-controller-manager" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974026 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" 
containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974033 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974048 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974053 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974059 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974065 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" containerName="kube-multus-additional-cni-plugins" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974077 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974083 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974091 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974096 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974104 5107 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974109 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="extract-utilities" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974116 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c5bf76b-3e95-425c-911a-5078511bd0f3" containerName="pruner" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974121 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5bf76b-3e95-425c-911a-5078511bd0f3" containerName="pruner" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974128 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974133 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974158 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974164 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="extract-content" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974172 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974177 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="registry-server" Feb 20 00:11:46 crc 
kubenswrapper[5107]: I0220 00:11:46.974186 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974191 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974201 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974206 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974281 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="67484e0a-1c92-441c-9b19-892f98f62176" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974291 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" containerName="route-controller-manager" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974299 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="58158355-aadb-4a44-8f4e-3e0c20d702e6" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974307 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="afb3e406-6312-4e7e-bcaa-f3c532a0c1ea" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974315 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c5bf76b-3e95-425c-911a-5078511bd0f3" containerName="pruner" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974321 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="01d70318-38f6-4dc0-acc4-36458ccf419c" 
containerName="kube-multus-additional-cni-plugins" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.974328 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="69b44045-c596-43d0-bf80-5e5c89671bef" containerName="registry-server" Feb 20 00:11:46 crc kubenswrapper[5107]: E0220 00:11:46.975239 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0\": container with ID starting with 2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0 not found: ID does not exist" containerID="2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.975269 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0"} err="failed to get container status \"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0\": rpc error: code = NotFound desc = could not find container \"2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0\": container with ID starting with 2ec793c0b41108e2992b070edf3f5a7310e1ac6e9f8dd8e6309ec160246715d0 not found: ID does not exist" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.981522 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:46 crc kubenswrapper[5107]: I0220 00:11:46.985511 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"] Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.031761 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.031813 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.031857 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.031881 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmfd\" (UniqueName: \"kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: 
\"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.031909 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.119958 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133004 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp\") pod \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133065 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133104 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca\") pod \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133133 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133184 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config\") pod \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133209 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133240 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6v7k\" (UniqueName: \"kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k\") pod \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133285 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert\") pod \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\" (UID: \"78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133317 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133340 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133414 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfwb5\" (UniqueName: \"kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5\") pod \"5fef25fa-89ed-4fb1-acff-af76da291a0c\" (UID: \"5fef25fa-89ed-4fb1-acff-af76da291a0c\") " Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133558 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133587 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133639 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133689 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wcmfd\" (UniqueName: \"kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.133752 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.134986 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.135436 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp" (OuterVolumeSpecName: "tmp") pod "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" (UID: "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.135966 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp" (OuterVolumeSpecName: "tmp") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136301 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca" (OuterVolumeSpecName: "client-ca") pod "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" (UID: "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136451 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config" (OuterVolumeSpecName: "config") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136590 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136875 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config" (OuterVolumeSpecName: "config") pod "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" (UID: "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136893 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.136885 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.144881 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.145581 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.148617 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5" (OuterVolumeSpecName: "kube-api-access-cfwb5") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "kube-api-access-cfwb5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.149986 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fef25fa-89ed-4fb1-acff-af76da291a0c" (UID: "5fef25fa-89ed-4fb1-acff-af76da291a0c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.151373 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k" (OuterVolumeSpecName: "kube-api-access-x6v7k") pod "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" (UID: "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138"). InnerVolumeSpecName "kube-api-access-x6v7k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.151484 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" (UID: "78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.151722 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5fef25fa-89ed-4fb1-acff-af76da291a0c" containerName="controller-manager"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.151745 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fef25fa-89ed-4fb1-acff-af76da291a0c" containerName="controller-manager"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.151885 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="5fef25fa-89ed-4fb1-acff-af76da291a0c" containerName="controller-manager"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.154847 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcmfd\" (UniqueName: \"kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd\") pod \"route-controller-manager-7df4547dbc-fmvgf\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.163372 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.163544 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234232 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234302 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234407 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234542 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234576 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2phd\" (UniqueName: \"kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234671 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234793 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfwb5\" (UniqueName: \"kubernetes.io/projected/5fef25fa-89ed-4fb1-acff-af76da291a0c-kube-api-access-cfwb5\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234817 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-tmp\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234832 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234843 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-client-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234855 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234866 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234878 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fef25fa-89ed-4fb1-acff-af76da291a0c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234889 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6v7k\" (UniqueName: \"kubernetes.io/projected/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-kube-api-access-x6v7k\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234898 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234906 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fef25fa-89ed-4fb1-acff-af76da291a0c-tmp\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.234916 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fef25fa-89ed-4fb1-acff-af76da291a0c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.278162 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.280324 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5894f84bfd-np8kz"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.296562 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336091 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336157 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2phd\" (UniqueName: \"kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336216 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336299 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.336345 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.337159 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.337412 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.337466 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.338017 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.339688 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.355462 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2phd\" (UniqueName: \"kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd\") pod \"controller-manager-6588c86656-wbjc9\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.484115 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.691939 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.709518 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"]
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.966520 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerStarted","Data":"b7a7abac181a83bb3b63a00d3fc88e813e409f56e207afdb5fb95fdd5843c74f"}
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.966579 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerStarted","Data":"d3568d743a64a4427e48403ea403c7160bd3626ef0445a9d57bce02227a2aa0d"}
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.968695 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.970687 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" event={"ID":"12c41778-313a-479a-8b45-5e0e9abef1cb","Type":"ContainerStarted","Data":"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f"}
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.970730 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" event={"ID":"12c41778-313a-479a-8b45-5e0e9abef1cb","Type":"ContainerStarted","Data":"668b34b1f3f267de4abffde906a506a37e379c25fb1a6525a956d9f4808efbad"}
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.971499 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.976872 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.976943 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58dd86879d-ntsvq" event={"ID":"5fef25fa-89ed-4fb1-acff-af76da291a0c","Type":"ContainerDied","Data":"ddc90a6a8ee235501e143bf75b2a63329438f08473a88bd869a2ed087aef3e2a"}
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.976981 5107 scope.go:117] "RemoveContainer" containerID="149fad8151c9afb5d0de36bdfaec7d102dd604df90497afe05e6d70d09aaca33"
Feb 20 00:11:47 crc kubenswrapper[5107]: I0220 00:11:47.986421 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podStartSLOduration=1.9864046869999998 podStartE2EDuration="1.986404687s" podCreationTimestamp="2026-02-20 00:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:47.986252382 +0000 UTC m=+194.354909948" watchObservedRunningTime="2026-02-20 00:11:47.986404687 +0000 UTC m=+194.355062253"
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.002708 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" podStartSLOduration=2.002693787 podStartE2EDuration="2.002693787s" podCreationTimestamp="2026-02-20 00:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:11:47.999481812 +0000 UTC m=+194.368139398" watchObservedRunningTime="2026-02-20 00:11:48.002693787 +0000 UTC m=+194.371351353"
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.017747 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"]
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.020952 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58dd86879d-ntsvq"]
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.341762 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9"
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.495712 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fef25fa-89ed-4fb1-acff-af76da291a0c" path="/var/lib/kubelet/pods/5fef25fa-89ed-4fb1-acff-af76da291a0c/volumes"
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.496632 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138" path="/var/lib/kubelet/pods/78740b92-9ad5-4d2b-bc3c-aa1b7fc3a138/volumes"
Feb 20 00:11:48 crc kubenswrapper[5107]: I0220 00:11:48.571680 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.448548 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467183 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467238 5107 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467542 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467787 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467806 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467814 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467821 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467829 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467834 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467866 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467873 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467891 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467897 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467904 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467909 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467916 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467922 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467930 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467936 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467944 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.467949 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.468057 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c" gracePeriod=15
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.468494 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a" gracePeriod=15
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.468543 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533" gracePeriod=15
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.468578 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24" gracePeriod=15
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469135 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469171 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469182 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469194 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469201 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469209 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469222 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469403 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469415 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469525 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469534 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.469609 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a" gracePeriod=15
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.476597 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506043 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506444 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506455 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506463 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506485 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506580 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506695 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506758 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506806 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.506831 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.607847 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.607909 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.607964 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.607967 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.607994 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608027 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608046 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608068 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608092 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608094 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608125 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608130 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608167 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608218 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608234 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608258 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608270 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608283 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 20 00:11:52 crc
kubenswrapper[5107]: I0220 00:11:52.608576 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:52 crc kubenswrapper[5107]: I0220 00:11:52.608609 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.014592 5107 generic.go:358] "Generic (PLEG): container finished" podID="a285d85f-6695-407b-aed4-2050b9a32b34" containerID="061642bbba5010e3268842502df49978596b582532a514d0c2ee5ce7d2b0026e" exitCode=0 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.014744 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a285d85f-6695-407b-aed4-2050b9a32b34","Type":"ContainerDied","Data":"061642bbba5010e3268842502df49978596b582532a514d0c2ee5ce7d2b0026e"} Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.015764 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.021929 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.023970 5107 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.024920 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a" exitCode=0 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.024955 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533" exitCode=0 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.024969 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24" exitCode=0 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.024981 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a" exitCode=2 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.025233 5107 scope.go:117] "RemoveContainer" containerID="dedae9d10992c0717bf9a6a55742b97566a7e6ea9660a223cd9df127ca3dc627" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.028234 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/0.log" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.028277 5107 generic.go:358] "Generic (PLEG): container finished" podID="330407c1-176d-45cc-a46a-35809411c33c" containerID="b7a7abac181a83bb3b63a00d3fc88e813e409f56e207afdb5fb95fdd5843c74f" exitCode=255 Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.028379 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerDied","Data":"b7a7abac181a83bb3b63a00d3fc88e813e409f56e207afdb5fb95fdd5843c74f"} Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.028958 5107 scope.go:117] "RemoveContainer" containerID="b7a7abac181a83bb3b63a00d3fc88e813e409f56e207afdb5fb95fdd5843c74f" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.030562 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:53 crc kubenswrapper[5107]: I0220 00:11:53.031119 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:53 crc kubenswrapper[5107]: E0220 00:11:53.078523 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-6588c86656-wbjc9.1895cbfb17429b47\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-6588c86656-wbjc9.1895cbfb17429b47 openshift-controller-manager 39050 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-6588c86656-wbjc9,UID:330407c1-176d-45cc-a46a-35809411c33c,APIVersion:v1,ResourceVersion:39036,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:11:47 +0000 UTC,LastTimestamp:2026-02-20 00:11:53.076891186 +0000 UTC m=+199.445548752,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.041377 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.045964 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/0.log" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.046170 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerStarted","Data":"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5"} Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.046837 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.047253 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" 
pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.047763 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.416260 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.416966 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.417446 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.491090 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 
38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.491575 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.532648 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir\") pod \"a285d85f-6695-407b-aed4-2050b9a32b34\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.532739 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock\") pod \"a285d85f-6695-407b-aed4-2050b9a32b34\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.532811 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access\") pod \"a285d85f-6695-407b-aed4-2050b9a32b34\" (UID: \"a285d85f-6695-407b-aed4-2050b9a32b34\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.546391 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a285d85f-6695-407b-aed4-2050b9a32b34" (UID: "a285d85f-6695-407b-aed4-2050b9a32b34"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.546511 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock" (OuterVolumeSpecName: "var-lock") pod "a285d85f-6695-407b-aed4-2050b9a32b34" (UID: "a285d85f-6695-407b-aed4-2050b9a32b34"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.552707 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a285d85f-6695-407b-aed4-2050b9a32b34" (UID: "a285d85f-6695-407b-aed4-2050b9a32b34"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.637263 5107 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.637306 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a285d85f-6695-407b-aed4-2050b9a32b34-var-lock\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.637322 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a285d85f-6695-407b-aed4-2050b9a32b34-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.878479 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 
00:11:54.879679 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.880453 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.880947 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.881520 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.940906 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.940984 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: 
\"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941046 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941096 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941182 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941271 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941366 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941489 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941619 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941651 5107 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941669 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.941878 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:54 crc kubenswrapper[5107]: I0220 00:11:54.943115 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.043708 5107 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.043766 5107 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.046974 5107 patch_prober.go:28] interesting pod/controller-manager-6588c86656-wbjc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" start-of-body= Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.047050 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.055165 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"a285d85f-6695-407b-aed4-2050b9a32b34","Type":"ContainerDied","Data":"a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3"} Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.055202 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b71ed5a3448b4b0c50726dff68cfe918d2a6cbbf249e30d68f58cd89c43ec3" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.055402 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.058735 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.059742 5107 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c" exitCode=0 Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.059912 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.059944 5107 scope.go:117] "RemoveContainer" containerID="0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.104026 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.104541 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.104962 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.105401 5107 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.105789 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.106162 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.110066 5107 scope.go:117] "RemoveContainer" containerID="750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.125958 5107 scope.go:117] "RemoveContainer" containerID="b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.147624 5107 scope.go:117] "RemoveContainer" containerID="21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.166818 5107 
scope.go:117] "RemoveContainer" containerID="7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.187641 5107 scope.go:117] "RemoveContainer" containerID="923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.252254 5107 scope.go:117] "RemoveContainer" containerID="0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.254815 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a\": container with ID starting with 0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a not found: ID does not exist" containerID="0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.254923 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a"} err="failed to get container status \"0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a\": rpc error: code = NotFound desc = could not find container \"0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a\": container with ID starting with 0860fb1002872b9679a5f53ddfb7bb7ea4bf0848b9a92fb159e28ca15ca7637a not found: ID does not exist" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.255002 5107 scope.go:117] "RemoveContainer" containerID="750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.255769 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533\": container with ID starting with 
750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533 not found: ID does not exist" containerID="750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.255851 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533"} err="failed to get container status \"750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533\": rpc error: code = NotFound desc = could not find container \"750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533\": container with ID starting with 750bdb9c855b0e7e5aab93bb7f41b27a004c22584c4a40c4bfa03679ec1a2533 not found: ID does not exist" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.255892 5107 scope.go:117] "RemoveContainer" containerID="b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.256703 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24\": container with ID starting with b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24 not found: ID does not exist" containerID="b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.256744 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24"} err="failed to get container status \"b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24\": rpc error: code = NotFound desc = could not find container \"b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24\": container with ID starting with b2c0ff440f5c8cffa2641d2bbd39fe8e52017724608d93d4b248240be6a0af24 not found: ID does not 
exist" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.256772 5107 scope.go:117] "RemoveContainer" containerID="21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.257205 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a\": container with ID starting with 21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a not found: ID does not exist" containerID="21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.257254 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a"} err="failed to get container status \"21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a\": rpc error: code = NotFound desc = could not find container \"21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a\": container with ID starting with 21cc3154b74aece1eed247b85c80f9f2c7d77280e04711ce90bc778752a8738a not found: ID does not exist" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.257289 5107 scope.go:117] "RemoveContainer" containerID="7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.257735 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c\": container with ID starting with 7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c not found: ID does not exist" containerID="7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.258135 5107 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c"} err="failed to get container status \"7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c\": rpc error: code = NotFound desc = could not find container \"7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c\": container with ID starting with 7a92dad77499ef534f5d281fd3c3fc2e3d25902901a8f84b5a0a2ea296d9a91c not found: ID does not exist" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.258499 5107 scope.go:117] "RemoveContainer" containerID="923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae" Feb 20 00:11:55 crc kubenswrapper[5107]: E0220 00:11:55.259249 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae\": container with ID starting with 923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae not found: ID does not exist" containerID="923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae" Feb 20 00:11:55 crc kubenswrapper[5107]: I0220 00:11:55.259297 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae"} err="failed to get container status \"923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae\": rpc error: code = NotFound desc = could not find container \"923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae\": container with ID starting with 923e75e5853dfb6b0bc6213ad0529fecf43cf5f9f3efceb7424aeeb52c870fae not found: ID does not exist" Feb 20 00:11:56 crc kubenswrapper[5107]: I0220 00:11:56.060538 5107 patch_prober.go:28] interesting pod/controller-manager-6588c86656-wbjc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 20 00:11:56 crc kubenswrapper[5107]: I0220 00:11:56.060886 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 00:11:56 crc kubenswrapper[5107]: I0220 00:11:56.495201 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 20 00:11:56 crc kubenswrapper[5107]: E0220 00:11:56.526351 5107 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" volumeName="registry-storage" Feb 20 00:11:57 crc kubenswrapper[5107]: E0220 00:11:57.508405 5107 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:11:57 crc kubenswrapper[5107]: I0220 00:11:57.508861 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:11:57 crc kubenswrapper[5107]: W0220 00:11:57.530818 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-bbfd1fa2cceb6160edab6e4e5c8a2f64808e84b7abc7f4a826d99028cf2ffad0 WatchSource:0}: Error finding container bbfd1fa2cceb6160edab6e4e5c8a2f64808e84b7abc7f4a826d99028cf2ffad0: Status 404 returned error can't find the container with id bbfd1fa2cceb6160edab6e4e5c8a2f64808e84b7abc7f4a826d99028cf2ffad0 Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.082540 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"e1c9ceaba200040b8729b7182ab2b41b7613a3f01b506b707d157feec57c64a0"} Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.082912 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"bbfd1fa2cceb6160edab6e4e5c8a2f64808e84b7abc7f4a826d99028cf2ffad0"} Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.083196 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.083892 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.083963 5107 kubelet.go:3342] 
"Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.084298 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.638254 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.638567 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.639257 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.639722 5107 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.640025 5107 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:11:58 crc kubenswrapper[5107]: I0220 00:11:58.640048 5107 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.640333 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms" Feb 20 00:11:58 crc kubenswrapper[5107]: E0220 00:11:58.840769 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Feb 20 00:11:59 crc kubenswrapper[5107]: E0220 00:11:59.242130 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Feb 20 00:12:00 crc kubenswrapper[5107]: E0220 00:12:00.043812 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Feb 20 00:12:00 crc kubenswrapper[5107]: I0220 00:12:00.695953 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" 
containerID="cri-o://740565f68cb2dabe658102818d5b70eb0f91de426bd1df15e4b67d5b8543ec5b" gracePeriod=15 Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.104349 5107 generic.go:358] "Generic (PLEG): container finished" podID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerID="740565f68cb2dabe658102818d5b70eb0f91de426bd1df15e4b67d5b8543ec5b" exitCode=0 Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.104443 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" event={"ID":"b7763f2e-cc78-4dd1-a5d8-599e880ed627","Type":"ContainerDied","Data":"740565f68cb2dabe658102818d5b70eb0f91de426bd1df15e4b67d5b8543ec5b"} Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.207211 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.207813 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.208178 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.208611 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333316 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333362 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333405 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333430 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8wx\" (UniqueName: \"kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333472 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333521 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333742 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.333960 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334067 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334113 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334210 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334251 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334305 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334371 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error\") pod \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.334528 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login\") pod 
\"b7763f2e-cc78-4dd1-a5d8-599e880ed627\" (UID: \"b7763f2e-cc78-4dd1-a5d8-599e880ed627\") " Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335087 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335154 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335300 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335350 5107 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335390 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335416 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.335405 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.340569 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.341078 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.341288 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.342115 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.342225 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx" (OuterVolumeSpecName: "kube-api-access-6c8wx") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "kube-api-access-6c8wx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.342834 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.343183 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.344625 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.350920 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7763f2e-cc78-4dd1-a5d8-599e880ed627" (UID: "b7763f2e-cc78-4dd1-a5d8-599e880ed627"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.437459 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438010 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438031 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438051 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438073 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438095 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6c8wx\" (UniqueName: \"kubernetes.io/projected/b7763f2e-cc78-4dd1-a5d8-599e880ed627-kube-api-access-6c8wx\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438114 5107 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438133 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438173 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438192 5107 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7763f2e-cc78-4dd1-a5d8-599e880ed627-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: I0220 00:12:01.438213 5107 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7763f2e-cc78-4dd1-a5d8-599e880ed627-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:01 crc kubenswrapper[5107]: E0220 00:12:01.645574 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.116351 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.116379 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" event={"ID":"b7763f2e-cc78-4dd1-a5d8-599e880ed627","Type":"ContainerDied","Data":"51e6f06eeeaaa595acc8bbf33fe8264a5899a7962776f1c025de7c3ae71f6575"} Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.116478 5107 scope.go:117] "RemoveContainer" containerID="740565f68cb2dabe658102818d5b70eb0f91de426bd1df15e4b67d5b8543ec5b" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.117455 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.117846 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.118254 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.147207 5107 status_manager.go:895] "Failed to get status for pod" 
podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.148007 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: I0220 00:12:02.148735 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:02 crc kubenswrapper[5107]: E0220 00:12:02.771864 5107 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events/controller-manager-6588c86656-wbjc9.1895cbfb17429b47\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-6588c86656-wbjc9.1895cbfb17429b47 openshift-controller-manager 39050 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-6588c86656-wbjc9,UID:330407c1-176d-45cc-a46a-35809411c33c,APIVersion:v1,ResourceVersion:39036,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-20 00:11:47 +0000 UTC,LastTimestamp:2026-02-20 00:11:53.076891186 +0000 UTC m=+199.445548752,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 20 00:12:04 crc kubenswrapper[5107]: I0220 00:12:04.492284 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:04 crc kubenswrapper[5107]: I0220 00:12:04.493208 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:04 crc kubenswrapper[5107]: I0220 00:12:04.493793 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:04 crc kubenswrapper[5107]: E0220 00:12:04.846903 5107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection 
refused" interval="6.4s" Feb 20 00:12:05 crc kubenswrapper[5107]: I0220 00:12:05.574870 5107 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 20 00:12:05 crc kubenswrapper[5107]: I0220 00:12:05.574995 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.060941 5107 patch_prober.go:28] interesting pod/controller-manager-6588c86656-wbjc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" start-of-body= Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.061091 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.146346 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.146653 5107 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd" 
exitCode=1 Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.146823 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd"} Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.147850 5107 scope.go:117] "RemoveContainer" containerID="016de0626e1bb48e4a214e94d6f0fbe89c072d510e904bc496f85ef33fa1ccbd" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.148055 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.148422 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.148968 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: I0220 00:12:06.150413 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.380189 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:12:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:12:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:12:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T00:12:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.381132 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.381454 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.381704 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.382132 5107 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:06 crc kubenswrapper[5107]: E0220 00:12:06.382192 5107 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.163277 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.163514 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cfe1bc574828f0d5c214e54ff54e289264a7bfc4543d2608b091b67651b52c9c"} Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.164946 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.165886 5107 status_manager.go:895] 
"Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.166626 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.167226 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.486282 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.487582 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.488753 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.489640 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.490205 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.510111 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.510206 
5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:07 crc kubenswrapper[5107]: E0220 00:12:07.511777 5107 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:07 crc kubenswrapper[5107]: I0220 00:12:07.512228 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:07 crc kubenswrapper[5107]: W0220 00:12:07.543314 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-158a3590a9781cdf337bb15821c0d7bdbd78ae3cfb2c484e2af10c7617725bfe WatchSource:0}: Error finding container 158a3590a9781cdf337bb15821c0d7bdbd78ae3cfb2c484e2af10c7617725bfe: Status 404 returned error can't find the container with id 158a3590a9781cdf337bb15821c0d7bdbd78ae3cfb2c484e2af10c7617725bfe Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.175003 5107 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="8208a58c987f48aa9f2fbb5b780d4d5d7ad970ad6c346553927713576c0a92a9" exitCode=0 Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.175189 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"8208a58c987f48aa9f2fbb5b780d4d5d7ad970ad6c346553927713576c0a92a9"} Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.175264 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"158a3590a9781cdf337bb15821c0d7bdbd78ae3cfb2c484e2af10c7617725bfe"} Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.175802 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.175828 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.176514 5107 status_manager.go:895] "Failed to get status for pod" podUID="330407c1-176d-45cc-a46a-35809411c33c" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6588c86656-wbjc9\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:08 crc kubenswrapper[5107]: E0220 00:12:08.176526 5107 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.176944 5107 status_manager.go:895] "Failed to get status for pod" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" pod="openshift-authentication/oauth-openshift-66458b6674-lp56s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-lp56s\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.177383 5107 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:08 crc kubenswrapper[5107]: I0220 00:12:08.177719 5107 status_manager.go:895] "Failed to get status for pod" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.180:6443: connect: connection refused" Feb 20 00:12:09 crc kubenswrapper[5107]: I0220 00:12:09.183598 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1d86e6776b638b26f4e5048461a14574dfd2776f52999e0904ae9a1d753319e0"} Feb 20 00:12:09 crc kubenswrapper[5107]: I0220 00:12:09.183926 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5589cc1fcf22c317b5f327f7de8ceab726c4372a8cfaf9b21ba5fa3eab74af8d"} Feb 20 00:12:09 crc kubenswrapper[5107]: I0220 00:12:09.183944 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a98aed6cb75543826967868f3d6ca5610f27985c9bcb44962c639f7e73d7328e"} Feb 20 00:12:10 crc kubenswrapper[5107]: I0220 00:12:10.191760 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d6af7c914cd33028bd8b5cce7c8dc3d89f710b61024f6257c5e7792aabbce1ae"} Feb 20 00:12:10 crc kubenswrapper[5107]: I0220 00:12:10.191826 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"8eb19c9c084d75d8808cbda012b4f73a40308a079cf5474ddf0d40ba889d9173"} Feb 20 00:12:10 crc kubenswrapper[5107]: I0220 00:12:10.192228 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:10 crc kubenswrapper[5107]: I0220 00:12:10.192485 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:10 crc kubenswrapper[5107]: I0220 00:12:10.192516 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:12 crc kubenswrapper[5107]: I0220 00:12:12.512721 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:12 crc kubenswrapper[5107]: I0220 00:12:12.512777 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:12 crc kubenswrapper[5107]: I0220 00:12:12.521063 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:13 crc kubenswrapper[5107]: I0220 00:12:13.399561 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:12:13 crc kubenswrapper[5107]: I0220 00:12:13.405041 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:12:14 crc kubenswrapper[5107]: I0220 00:12:14.216471 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:12:15 crc kubenswrapper[5107]: I0220 
00:12:15.359535 5107 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:15 crc kubenswrapper[5107]: I0220 00:12:15.359592 5107 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:15 crc kubenswrapper[5107]: I0220 00:12:15.542129 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="e1b5bd17-c259-43e0-bb4d-e4ccb099d833" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.061129 5107 patch_prober.go:28] interesting pod/controller-manager-6588c86656-wbjc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" start-of-body= Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.061213 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.234898 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.234941 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.239246 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" 
podUID="e1b5bd17-c259-43e0-bb4d-e4ccb099d833" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.243018 5107 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://a98aed6cb75543826967868f3d6ca5610f27985c9bcb44962c639f7e73d7328e" Feb 20 00:12:16 crc kubenswrapper[5107]: I0220 00:12:16.243061 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:17 crc kubenswrapper[5107]: I0220 00:12:17.254584 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:17 crc kubenswrapper[5107]: I0220 00:12:17.255321 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:17 crc kubenswrapper[5107]: I0220 00:12:17.259432 5107 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="e1b5bd17-c259-43e0-bb4d-e4ccb099d833" Feb 20 00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.308059 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/1.log" Feb 20 00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.310426 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/0.log" Feb 20 00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.310499 5107 generic.go:358] "Generic (PLEG): container finished" podID="330407c1-176d-45cc-a46a-35809411c33c" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" exitCode=255 Feb 20 
00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.310677 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerDied","Data":"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5"} Feb 20 00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.310807 5107 scope.go:117] "RemoveContainer" containerID="b7a7abac181a83bb3b63a00d3fc88e813e409f56e207afdb5fb95fdd5843c74f" Feb 20 00:12:24 crc kubenswrapper[5107]: I0220 00:12:24.312277 5107 scope.go:117] "RemoveContainer" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:24 crc kubenswrapper[5107]: E0220 00:12:24.312814 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-6588c86656-wbjc9_openshift-controller-manager(330407c1-176d-45cc-a46a-35809411c33c)\"" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" Feb 20 00:12:25 crc kubenswrapper[5107]: I0220 00:12:25.238952 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 20 00:12:25 crc kubenswrapper[5107]: I0220 00:12:25.319908 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/1.log" Feb 20 00:12:25 crc kubenswrapper[5107]: I0220 00:12:25.642438 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 20 00:12:25 crc kubenswrapper[5107]: I0220 00:12:25.741850 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 20 00:12:25 crc kubenswrapper[5107]: I0220 00:12:25.832086 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.208039 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.273433 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.317814 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.494696 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.496297 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.706063 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.910608 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 20 00:12:26 crc kubenswrapper[5107]: I0220 00:12:26.917600 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.191628 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.351208 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.414549 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.485218 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.487504 5107 scope.go:117] "RemoveContainer" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:27 crc kubenswrapper[5107]: E0220 00:12:27.488573 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-6588c86656-wbjc9_openshift-controller-manager(330407c1-176d-45cc-a46a-35809411c33c)\"" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" Feb 20 00:12:27 crc kubenswrapper[5107]: I0220 00:12:27.610722 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.077600 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.189925 5107 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.207457 5107 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.232601 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.261418 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.332129 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.383859 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.429506 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.476858 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.530227 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.599901 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.757788 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 20 00:12:28 crc 
kubenswrapper[5107]: I0220 00:12:28.791875 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.879919 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 20 00:12:28 crc kubenswrapper[5107]: I0220 00:12:28.945371 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.031219 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.106188 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.130487 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.219898 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.260963 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.300407 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.313299 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.314984 5107 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.338762 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.445678 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.470742 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.814386 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.834221 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.864049 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.910956 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 20 00:12:29 crc kubenswrapper[5107]: I0220 00:12:29.971105 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.048991 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.070759 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.153257 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.157492 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.161951 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.219220 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.364976 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.377472 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.378947 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.420459 5107 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.531317 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.873226 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.919176 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 20 00:12:30 crc kubenswrapper[5107]: I0220 00:12:30.960194 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.125883 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.193314 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.428044 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.552798 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.873601 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.937565 5107 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.950839 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.962880 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 20 00:12:31 crc kubenswrapper[5107]: I0220 00:12:31.992837 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.096806 5107 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.099684 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.155744 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.183958 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.190945 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.270212 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 
00:12:32.283123 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.343875 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.449864 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.463005 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.518530 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.542092 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.618589 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.825352 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.825858 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.916352 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.928376 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.931751 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.947290 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 20 00:12:32 crc kubenswrapper[5107]: I0220 00:12:32.977565 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.043129 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.171835 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.204562 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.278274 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 
00:12:33.280121 5107 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.285988 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-lp56s","openshift-kube-apiserver/kube-apiserver-crc"] Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.286058 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5c9897548b-czzvs"] Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.286491 5107 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.286515 5107 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ad017d0a-4200-4b1f-a03e-ae2b2b81ab8b" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287026 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287070 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287105 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" containerName="installer" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287114 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" containerName="installer" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287247 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a285d85f-6695-407b-aed4-2050b9a32b34" containerName="installer" Feb 
20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.287277 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" containerName="oauth-openshift" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.294277 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.294336 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.295953 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296172 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296181 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296558 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296567 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296887 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.296898 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.298636 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.299806 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.299835 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.300358 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.300636 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.300657 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.309326 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.311652 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.337846 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.337804257 
podStartE2EDuration="18.337804257s" podCreationTimestamp="2026-02-20 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:33.332300091 +0000 UTC m=+239.700957657" watchObservedRunningTime="2026-02-20 00:12:33.337804257 +0000 UTC m=+239.706461843" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.343563 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398120 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398216 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398240 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398257 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398344 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398411 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398533 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398593 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-policies\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398631 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398719 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398755 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398818 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-dir\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: 
\"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398866 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-session\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.398949 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8fvw\" (UniqueName: \"kubernetes.io/projected/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-kube-api-access-p8fvw\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.418021 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.444652 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.465540 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.467158 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500296 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500372 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-policies\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500582 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500792 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500872 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.500973 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-dir\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501030 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-session\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501063 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8fvw\" (UniqueName: \"kubernetes.io/projected/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-kube-api-access-p8fvw\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501082 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-dir\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501166 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501196 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501241 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501266 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.501299 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.502161 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.502408 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.502714 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-audit-policies\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.502729 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.507890 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-login\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.508448 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-error\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.508645 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.508919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-session\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.509525 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.509765 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.512667 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.513309 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.513820 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.525114 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8fvw\" (UniqueName: \"kubernetes.io/projected/4494eaf7-6e76-4a39-80c1-8ef663be1eb1-kube-api-access-p8fvw\") pod \"oauth-openshift-5c9897548b-czzvs\" (UID: \"4494eaf7-6e76-4a39-80c1-8ef663be1eb1\") " pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.572261 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.614434 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.658462 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.697189 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.722497 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.726227 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.752574 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.787829 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.857619 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.881247 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.891793 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:33 crc kubenswrapper[5107]: I0220 00:12:33.950891 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.008756 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.025378 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.071510 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.095677 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.116415 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.133317 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.176929 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.196031 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.316511 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.321617 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.330035 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5c9897548b-czzvs"]
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.349622 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.461824 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.474906 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.499901 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7763f2e-cc78-4dd1-a5d8-599e880ed627" path="/var/lib/kubelet/pods/b7763f2e-cc78-4dd1-a5d8-599e880ed627/volumes"
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.591057 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.618554 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.746228 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.756171 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.868707 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.893256 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.913084 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 20 00:12:34 crc kubenswrapper[5107]: I0220 00:12:34.967574 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.027297 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.258423 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.329492 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.332859 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.353837 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.363590 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.411501 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.489752 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.499813 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.513497 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.636053 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.653021 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.681018 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.703629 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.814045 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.820227 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.853420 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.885686 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.885765 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 20 00:12:35 crc kubenswrapper[5107]: I0220 00:12:35.924602 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.009241 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.039975 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.063599 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.071232 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.154204 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.217740 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.238135 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.280828 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.302774 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.389054 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: W0220 00:12:36.399303 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4494eaf7_6e76_4a39_80c1_8ef663be1eb1.slice/crio-01bdc4bc4e362c0350007c701a46395a66b36262e28fb15ba683ab9baf778c9d WatchSource:0}: Error finding container 01bdc4bc4e362c0350007c701a46395a66b36262e28fb15ba683ab9baf778c9d: Status 404 returned error can't find the container with id 01bdc4bc4e362c0350007c701a46395a66b36262e28fb15ba683ab9baf778c9d
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.565528 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.606478 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.697011 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.743893 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.818812 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 20 00:12:36 crc kubenswrapper[5107]: I0220 00:12:36.884274 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.024040 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.151127 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.259274 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.265997 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.408497 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" event={"ID":"4494eaf7-6e76-4a39-80c1-8ef663be1eb1","Type":"ContainerStarted","Data":"e674b0dd03c28bfbafa5f193c6506f1d9f2199d06ec211e3fa4121eff9bc51f9"}
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.408617 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" event={"ID":"4494eaf7-6e76-4a39-80c1-8ef663be1eb1","Type":"ContainerStarted","Data":"01bdc4bc4e362c0350007c701a46395a66b36262e28fb15ba683ab9baf778c9d"}
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.408760 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.417343 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs"
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.438383 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5c9897548b-czzvs" podStartSLOduration=62.438367735 podStartE2EDuration="1m2.438367735s" podCreationTimestamp="2026-02-20 00:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:37.43664436 +0000 UTC m=+243.805301956" watchObservedRunningTime="2026-02-20 00:12:37.438367735 +0000 UTC m=+243.807025301"
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.478309 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.548270 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.549584 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.567209 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.575056 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.701069 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.727294 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.744536 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.768577 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.891808 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.901982 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 20 00:12:37 crc kubenswrapper[5107]: I0220 00:12:37.966323 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.040077 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.062094 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.075654 5107 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.076022 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://e1c9ceaba200040b8729b7182ab2b41b7613a3f01b506b707d157feec57c64a0" gracePeriod=5
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.093342 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.093441 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.150405 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.240166 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.256552 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.279799 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.293659 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.511504 5107 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.581066 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.599680 5107 ???:1] "http: TLS handshake error from 192.168.126.11:60096: no serving certificate available for the kubelet"
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.632497 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.675492 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.698383 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.779075 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.802056 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.858976 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.890921 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.927411 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 20 00:12:38 crc kubenswrapper[5107]: I0220 00:12:38.956005 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.002406 5107 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.013589 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.066909 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.156482 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.310048 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.369953 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.405762 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.452529 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.510824 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.563627 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.566640 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.702032 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.724686 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.749957 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.816386 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.838466 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 20 00:12:39 crc kubenswrapper[5107]: I0220 00:12:39.905661 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.025979 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.085165 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.181876 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.190554 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.258964 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.261050 5107
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.396799 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.425241 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.495376 5107 scope.go:117] "RemoveContainer" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.575705 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.697649 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 20 00:12:40 crc kubenswrapper[5107]: I0220 00:12:40.727107 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.390701 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.433220 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/1.log" Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.433283 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" 
event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerStarted","Data":"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f"} Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.434360 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.440465 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:12:41 crc kubenswrapper[5107]: I0220 00:12:41.549733 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 20 00:12:42 crc kubenswrapper[5107]: I0220 00:12:42.213887 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.448821 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.448868 5107 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="e1c9ceaba200040b8729b7182ab2b41b7613a3f01b506b707d157feec57c64a0" exitCode=137 Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.684436 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.684585 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.687378 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.785965 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786062 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786173 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786228 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786288 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786123 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786419 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786358 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786498 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786834 5107 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786867 5107 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786883 5107 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.786900 5107 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.800807 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:12:43 crc kubenswrapper[5107]: I0220 00:12:43.888060 5107 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.458226 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.458427 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.458449 5107 scope.go:117] "RemoveContainer" containerID="e1c9ceaba200040b8729b7182ab2b41b7613a3f01b506b707d157feec57c64a0" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.480381 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.494810 5107 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 20 00:12:44 crc kubenswrapper[5107]: I0220 00:12:44.496407 5107 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.418889 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.419639 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" containerID="cri-o://53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f" gracePeriod=30 Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.446717 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.459680 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" podUID="12c41778-313a-479a-8b45-5e0e9abef1cb" containerName="route-controller-manager" containerID="cri-o://00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f" gracePeriod=30 Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.895635 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/1.log" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.895957 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.901168 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930259 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config\") pod \"12c41778-313a-479a-8b45-5e0e9abef1cb\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930298 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930321 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcmfd\" (UniqueName: \"kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd\") pod \"12c41778-313a-479a-8b45-5e0e9abef1cb\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930374 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2phd\" (UniqueName: \"kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930413 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp\") pod \"12c41778-313a-479a-8b45-5e0e9abef1cb\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930469 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert\") pod \"12c41778-313a-479a-8b45-5e0e9abef1cb\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930536 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930561 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930619 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca\") pod \"12c41778-313a-479a-8b45-5e0e9abef1cb\" (UID: \"12c41778-313a-479a-8b45-5e0e9abef1cb\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930651 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.930670 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles\") pod \"330407c1-176d-45cc-a46a-35809411c33c\" (UID: \"330407c1-176d-45cc-a46a-35809411c33c\") " Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.931379 5107 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config" (OuterVolumeSpecName: "config") pod "12c41778-313a-479a-8b45-5e0e9abef1cb" (UID: "12c41778-313a-479a-8b45-5e0e9abef1cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.931783 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca" (OuterVolumeSpecName: "client-ca") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.932114 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp" (OuterVolumeSpecName: "tmp") pod "12c41778-313a-479a-8b45-5e0e9abef1cb" (UID: "12c41778-313a-479a-8b45-5e0e9abef1cb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.932312 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp" (OuterVolumeSpecName: "tmp") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.932829 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.932931 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config" (OuterVolumeSpecName: "config") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.936264 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "12c41778-313a-479a-8b45-5e0e9abef1cb" (UID: "12c41778-313a-479a-8b45-5e0e9abef1cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.937827 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939312 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12c41778-313a-479a-8b45-5e0e9abef1cb" containerName="route-controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939336 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c41778-313a-479a-8b45-5e0e9abef1cb" containerName="route-controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939353 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939360 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939386 5107 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939395 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939405 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939413 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939562 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939578 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939589 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939603 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="12c41778-313a-479a-8b45-5e0e9abef1cb" containerName="route-controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.939913 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12c41778-313a-479a-8b45-5e0e9abef1cb" (UID: "12c41778-313a-479a-8b45-5e0e9abef1cb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.941048 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.944348 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd" (OuterVolumeSpecName: "kube-api-access-wcmfd") pod "12c41778-313a-479a-8b45-5e0e9abef1cb" (UID: "12c41778-313a-479a-8b45-5e0e9abef1cb"). InnerVolumeSpecName "kube-api-access-wcmfd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.946403 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd" (OuterVolumeSpecName: "kube-api-access-c2phd") pod "330407c1-176d-45cc-a46a-35809411c33c" (UID: "330407c1-176d-45cc-a46a-35809411c33c"). InnerVolumeSpecName "kube-api-access-c2phd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.949376 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.949519 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.964668 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.967350 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.967386 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.967864 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="330407c1-176d-45cc-a46a-35809411c33c" containerName="controller-manager" Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.971362 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:12:46 crc kubenswrapper[5107]: I0220 00:12:46.971482 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.031768 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.031834 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq9k\" (UniqueName: \"kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.031866 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.031889 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb6r\" (UniqueName: \"kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.031926 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032019 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032101 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032128 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032166 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config\") pod \"controller-manager-54cf5d9fd7-88vsn\" 
(UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032229 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032352 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032404 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c41778-313a-479a-8b45-5e0e9abef1cb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032416 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032425 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330407c1-176d-45cc-a46a-35809411c33c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032451 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032474 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/330407c1-176d-45cc-a46a-35809411c33c-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032486 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032500 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c41778-313a-479a-8b45-5e0e9abef1cb-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032517 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330407c1-176d-45cc-a46a-35809411c33c-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032528 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wcmfd\" (UniqueName: \"kubernetes.io/projected/12c41778-313a-479a-8b45-5e0e9abef1cb-kube-api-access-wcmfd\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032541 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2phd\" (UniqueName: \"kubernetes.io/projected/330407c1-176d-45cc-a46a-35809411c33c-kube-api-access-c2phd\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.032552 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12c41778-313a-479a-8b45-5e0e9abef1cb-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.133631 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.133706 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.133762 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq9k\" (UniqueName: \"kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.133808 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.133842 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vzb6r\" (UniqueName: \"kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " 
pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134380 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134456 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134507 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134542 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134577 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config\") 
pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134665 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.134797 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.135297 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.135447 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.135584 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.135898 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.137418 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.140392 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.140762 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.148741 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.151999 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzb6r\" (UniqueName: \"kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r\") pod \"controller-manager-54cf5d9fd7-88vsn\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.165923 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq9k\" (UniqueName: \"kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k\") pod \"route-controller-manager-5d96b79b56-v2992\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.271743 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.286619 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485081 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6588c86656-wbjc9_330407c1-176d-45cc-a46a-35809411c33c/controller-manager/1.log" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485128 5107 generic.go:358] "Generic (PLEG): container finished" podID="330407c1-176d-45cc-a46a-35809411c33c" containerID="53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f" exitCode=0 Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485209 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerDied","Data":"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f"} Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485240 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" event={"ID":"330407c1-176d-45cc-a46a-35809411c33c","Type":"ContainerDied","Data":"d3568d743a64a4427e48403ea403c7160bd3626ef0445a9d57bce02227a2aa0d"} Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485260 5107 scope.go:117] "RemoveContainer" containerID="53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.485400 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6588c86656-wbjc9" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.492265 5107 generic.go:358] "Generic (PLEG): container finished" podID="12c41778-313a-479a-8b45-5e0e9abef1cb" containerID="00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f" exitCode=0 Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.492419 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" event={"ID":"12c41778-313a-479a-8b45-5e0e9abef1cb","Type":"ContainerDied","Data":"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f"} Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.492445 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" event={"ID":"12c41778-313a-479a-8b45-5e0e9abef1cb","Type":"ContainerDied","Data":"668b34b1f3f267de4abffde906a506a37e379c25fb1a6525a956d9f4808efbad"} Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.492524 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.513335 5107 scope.go:117] "RemoveContainer" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.535916 5107 scope.go:117] "RemoveContainer" containerID="53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f" Feb 20 00:12:47 crc kubenswrapper[5107]: E0220 00:12:47.537761 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f\": container with ID starting with 53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f not found: ID does not exist" containerID="53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.537826 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f"} err="failed to get container status \"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f\": rpc error: code = NotFound desc = could not find container \"53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f\": container with ID starting with 53e1a4f3d3ec9ce597009a5da5e418607a3be98ed195760df9fc48815c4ac72f not found: ID does not exist" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.537855 5107 scope.go:117] "RemoveContainer" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:47 crc kubenswrapper[5107]: E0220 00:12:47.538294 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5\": container with ID starting with 
7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5 not found: ID does not exist" containerID="7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.538321 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5"} err="failed to get container status \"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5\": rpc error: code = NotFound desc = could not find container \"7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5\": container with ID starting with 7ca243e4fea32ec8130c6fc67f325c30d67ecc17956b590d458cf0fe2fec50c5 not found: ID does not exist" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.538337 5107 scope.go:117] "RemoveContainer" containerID="00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.538795 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"] Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.542618 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6588c86656-wbjc9"] Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.549955 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"] Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.555307 5107 scope.go:117] "RemoveContainer" containerID="00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f" Feb 20 00:12:47 crc kubenswrapper[5107]: E0220 00:12:47.555688 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f\": container with ID starting with 
00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f not found: ID does not exist" containerID="00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.555711 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f"} err="failed to get container status \"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f\": rpc error: code = NotFound desc = could not find container \"00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f\": container with ID starting with 00d82e708f4415a0c664a6413a2e273080a254576374e1d557feba7c5845d41f not found: ID does not exist" Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.556069 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df4547dbc-fmvgf"] Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.579343 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:12:47 crc kubenswrapper[5107]: I0220 00:12:47.749975 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:12:47 crc kubenswrapper[5107]: W0220 00:12:47.755427 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51b5dbd1_d2c9_4bb6_9e3f_7ba10409e3a1.slice/crio-4178c8cd73a8649a74c25f691c2ad4d2a68e93e864248b68bc9f95ceaf5335a6 WatchSource:0}: Error finding container 4178c8cd73a8649a74c25f691c2ad4d2a68e93e864248b68bc9f95ceaf5335a6: Status 404 returned error can't find the container with id 4178c8cd73a8649a74c25f691c2ad4d2a68e93e864248b68bc9f95ceaf5335a6 Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.506787 5107 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="12c41778-313a-479a-8b45-5e0e9abef1cb" path="/var/lib/kubelet/pods/12c41778-313a-479a-8b45-5e0e9abef1cb/volumes" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.508284 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330407c1-176d-45cc-a46a-35809411c33c" path="/var/lib/kubelet/pods/330407c1-176d-45cc-a46a-35809411c33c/volumes" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.520316 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" event={"ID":"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1","Type":"ContainerStarted","Data":"e2fb0426446fd01eb80ab0b591241e505cef70c2f4a7a78f5daad21aec586514"} Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.520377 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" event={"ID":"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1","Type":"ContainerStarted","Data":"4178c8cd73a8649a74c25f691c2ad4d2a68e93e864248b68bc9f95ceaf5335a6"} Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.520609 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.522771 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" event={"ID":"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7","Type":"ContainerStarted","Data":"2233a564007c02b03481c5846c9c6f8ecd7d9bc9425bac25c0022949445bd49d"} Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.522823 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" 
event={"ID":"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7","Type":"ContainerStarted","Data":"fb9df849449df7a6ecd73bf248fc583062e33531f3ff7a3cd065f27ac17d8331"} Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.523183 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.530702 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.553909 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" podStartSLOduration=2.553885147 podStartE2EDuration="2.553885147s" podCreationTimestamp="2026-02-20 00:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:48.54868945 +0000 UTC m=+254.917347016" watchObservedRunningTime="2026-02-20 00:12:48.553885147 +0000 UTC m=+254.922542753" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.571977 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" podStartSLOduration=2.571959407 podStartE2EDuration="2.571959407s" podCreationTimestamp="2026-02-20 00:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:12:48.566263234 +0000 UTC m=+254.934920830" watchObservedRunningTime="2026-02-20 00:12:48.571959407 +0000 UTC m=+254.940616973" Feb 20 00:12:48 crc kubenswrapper[5107]: I0220 00:12:48.755671 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 
00:12:52 crc kubenswrapper[5107]: I0220 00:12:52.717951 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 20 00:12:57 crc kubenswrapper[5107]: I0220 00:12:57.593291 5107 generic.go:358] "Generic (PLEG): container finished" podID="9833f13d-3814-43ad-afef-381d884e5950" containerID="3e90a833ccabef84d6f8a414df95fab479752a61eaa0f099c7b823171c0aa9d8" exitCode=0 Feb 20 00:12:57 crc kubenswrapper[5107]: I0220 00:12:57.593839 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerDied","Data":"3e90a833ccabef84d6f8a414df95fab479752a61eaa0f099c7b823171c0aa9d8"} Feb 20 00:12:57 crc kubenswrapper[5107]: I0220 00:12:57.594858 5107 scope.go:117] "RemoveContainer" containerID="3e90a833ccabef84d6f8a414df95fab479752a61eaa0f099c7b823171c0aa9d8" Feb 20 00:12:58 crc kubenswrapper[5107]: I0220 00:12:58.603059 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerStarted","Data":"5ca2faac67bd9a95ba1787bff583b54185d856ea9b2065afffa7e47b185f6d5c"} Feb 20 00:12:58 crc kubenswrapper[5107]: I0220 00:12:58.603798 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:12:58 crc kubenswrapper[5107]: I0220 00:12:58.611374 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:13:00 crc kubenswrapper[5107]: I0220 00:13:00.034658 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 20 00:13:02 crc kubenswrapper[5107]: I0220 00:13:02.825351 5107 
patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:13:02 crc kubenswrapper[5107]: I0220 00:13:02.825499 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.418873 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.419446 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" podUID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" containerName="controller-manager" containerID="cri-o://2233a564007c02b03481c5846c9c6f8ecd7d9bc9425bac25c0022949445bd49d" gracePeriod=30 Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.445201 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.446215 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" podUID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" containerName="route-controller-manager" containerID="cri-o://e2fb0426446fd01eb80ab0b591241e505cef70c2f4a7a78f5daad21aec586514" gracePeriod=30 Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.660795 5107 generic.go:358] "Generic (PLEG): container finished" 
podID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" containerID="e2fb0426446fd01eb80ab0b591241e505cef70c2f4a7a78f5daad21aec586514" exitCode=0 Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.660955 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" event={"ID":"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1","Type":"ContainerDied","Data":"e2fb0426446fd01eb80ab0b591241e505cef70c2f4a7a78f5daad21aec586514"} Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.662040 5107 generic.go:358] "Generic (PLEG): container finished" podID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" containerID="2233a564007c02b03481c5846c9c6f8ecd7d9bc9425bac25c0022949445bd49d" exitCode=0 Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.662078 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" event={"ID":"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7","Type":"ContainerDied","Data":"2233a564007c02b03481c5846c9c6f8ecd7d9bc9425bac25c0022949445bd49d"} Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.716076 5107 ???:1] "http: TLS handshake error from 192.168.126.11:32874: no serving certificate available for the kubelet" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.814706 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.878627 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"] Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.879428 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" containerName="controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.879446 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" containerName="controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.879562 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" containerName="controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.884409 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.885928 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"] Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.886070 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.915666 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"] Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.916454 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" containerName="route-controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.916505 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" containerName="route-controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.916666 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" containerName="route-controller-manager" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.916963 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert\") pod \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.916996 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca\") pod \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917018 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc 
kubenswrapper[5107]: I0220 00:13:06.917044 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhq9k\" (UniqueName: \"kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k\") pod \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917078 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzb6r\" (UniqueName: \"kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917104 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917190 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917225 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917257 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp\") pod 
\"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917287 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config\") pod \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\" (UID: \"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917331 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config\") pod \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\" (UID: \"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7\") " Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917431 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917459 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917508 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " 
pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917563 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2lq\" (UniqueName: \"kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917590 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917667 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.917975 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" (UID: "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.918835 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config" (OuterVolumeSpecName: "config") pod "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" (UID: "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.918834 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.920714 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp" (OuterVolumeSpecName: "tmp") pod "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" (UID: "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.921157 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp" (OuterVolumeSpecName: "tmp") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.922496 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.924130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config" (OuterVolumeSpecName: "config") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.924468 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r" (OuterVolumeSpecName: "kube-api-access-vzb6r") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "kube-api-access-vzb6r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.924657 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" (UID: "c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.924831 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.927303 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" (UID: "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.929236 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k" (OuterVolumeSpecName: "kube-api-access-hhq9k") pod "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" (UID: "51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1"). InnerVolumeSpecName "kube-api-access-hhq9k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:06 crc kubenswrapper[5107]: I0220 00:13:06.933043 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"] Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.018791 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.018864 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " 
pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.019367 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.019915 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.019284 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020254 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020421 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: 
\"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020481 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020514 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020539 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jsk\" (UniqueName: \"kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020570 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020589 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2lq\" (UniqueName: \"kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020638 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020759 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020780 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020798 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-tmp\") on node \"crc\" DevicePath \"\"" 
Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020814 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020831 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020847 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020862 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020878 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020894 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhq9k\" (UniqueName: \"kubernetes.io/projected/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1-kube-api-access-hhq9k\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020911 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzb6r\" (UniqueName: \"kubernetes.io/projected/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-kube-api-access-vzb6r\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.020926 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.022166 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.025995 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.037644 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2lq\" (UniqueName: \"kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq\") pod \"controller-manager-5f94b4dd44-pxgvx\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") " pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.122019 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.122098 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.122185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7jsk\" (UniqueName: \"kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.122251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.122295 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.124034 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc 
kubenswrapper[5107]: I0220 00:13:07.124068 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.124484 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.128650 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.138773 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7jsk\" (UniqueName: \"kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk\") pod \"route-controller-manager-84d6978b89-fcwdd\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") " pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.208009 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.238590 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.469582 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"] Feb 20 00:13:07 crc kubenswrapper[5107]: W0220 00:13:07.472393 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7968202_8763_48c2_88c6_b58c629a7e4b.slice/crio-9d77d179208cd9001e763f3225511ce4dfb882c434cf6929e6c3b6c66a21e1d4 WatchSource:0}: Error finding container 9d77d179208cd9001e763f3225511ce4dfb882c434cf6929e6c3b6c66a21e1d4: Status 404 returned error can't find the container with id 9d77d179208cd9001e763f3225511ce4dfb882c434cf6929e6c3b6c66a21e1d4 Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.524646 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"] Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.671962 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.671957 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992" event={"ID":"51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1","Type":"ContainerDied","Data":"4178c8cd73a8649a74c25f691c2ad4d2a68e93e864248b68bc9f95ceaf5335a6"} Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.672132 5107 scope.go:117] "RemoveContainer" containerID="e2fb0426446fd01eb80ab0b591241e505cef70c2f4a7a78f5daad21aec586514" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.674467 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" event={"ID":"d7968202-8763-48c2-88c6-b58c629a7e4b","Type":"ContainerStarted","Data":"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"} Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.674528 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" event={"ID":"d7968202-8763-48c2-88c6-b58c629a7e4b","Type":"ContainerStarted","Data":"9d77d179208cd9001e763f3225511ce4dfb882c434cf6929e6c3b6c66a21e1d4"} Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.675071 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.676708 5107 patch_prober.go:28] interesting pod/controller-manager-5f94b4dd44-pxgvx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.676751 5107 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.681299 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" event={"ID":"c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7","Type":"ContainerDied","Data":"fb9df849449df7a6ecd73bf248fc583062e33531f3ff7a3cd065f27ac17d8331"} Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.681432 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.682336 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" event={"ID":"53af46bb-34e6-406b-b2b2-6745d2b0263c","Type":"ContainerStarted","Data":"71f6abb788918f621ef092d48ecfc559d9a23d592170fb43c034a6c5a03a6bd7"} Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.689723 5107 scope.go:117] "RemoveContainer" containerID="2233a564007c02b03481c5846c9c6f8ecd7d9bc9425bac25c0022949445bd49d" Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.694395 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" podStartSLOduration=1.69437532 podStartE2EDuration="1.69437532s" podCreationTimestamp="2026-02-20 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:13:07.69372633 +0000 UTC m=+274.062383896" watchObservedRunningTime="2026-02-20 00:13:07.69437532 +0000 UTC m=+274.063032886" Feb 20 00:13:07 crc 
kubenswrapper[5107]: I0220 00:13:07.710117 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.715591 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-v2992"] Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.724502 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:13:07 crc kubenswrapper[5107]: I0220 00:13:07.728312 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-88vsn"] Feb 20 00:13:08 crc kubenswrapper[5107]: I0220 00:13:08.495305 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1" path="/var/lib/kubelet/pods/51b5dbd1-d2c9-4bb6-9e3f-7ba10409e3a1/volumes" Feb 20 00:13:08 crc kubenswrapper[5107]: I0220 00:13:08.496416 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7" path="/var/lib/kubelet/pods/c1840eb8-9ee5-41ad-8cf1-8e1a040cd6d7/volumes" Feb 20 00:13:08 crc kubenswrapper[5107]: I0220 00:13:08.701477 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" event={"ID":"53af46bb-34e6-406b-b2b2-6745d2b0263c","Type":"ContainerStarted","Data":"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870"} Feb 20 00:13:08 crc kubenswrapper[5107]: I0220 00:13:08.712452 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" Feb 20 00:13:08 crc kubenswrapper[5107]: I0220 00:13:08.731794 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" podStartSLOduration=2.7317636 podStartE2EDuration="2.7317636s" podCreationTimestamp="2026-02-20 00:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:13:08.726885304 +0000 UTC m=+275.095542920" watchObservedRunningTime="2026-02-20 00:13:08.7317636 +0000 UTC m=+275.100421236" Feb 20 00:13:09 crc kubenswrapper[5107]: I0220 00:13:09.050788 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 20 00:13:09 crc kubenswrapper[5107]: I0220 00:13:09.710470 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:09 crc kubenswrapper[5107]: I0220 00:13:09.724213 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:13:14 crc kubenswrapper[5107]: I0220 00:13:14.844858 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 20 00:13:15 crc kubenswrapper[5107]: I0220 00:13:15.127113 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 20 00:13:16 crc kubenswrapper[5107]: I0220 00:13:16.664435 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 20 00:13:32 crc kubenswrapper[5107]: I0220 00:13:32.824078 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:13:32 crc kubenswrapper[5107]: I0220 00:13:32.824600 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:13:32 crc kubenswrapper[5107]: I0220 00:13:32.824647 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:13:32 crc kubenswrapper[5107]: I0220 00:13:32.825138 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:13:32 crc kubenswrapper[5107]: I0220 00:13:32.825207 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2" gracePeriod=600 Feb 20 00:13:33 crc kubenswrapper[5107]: I0220 00:13:33.881767 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2" exitCode=0 Feb 20 00:13:33 crc kubenswrapper[5107]: I0220 00:13:33.881924 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" 
event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2"} Feb 20 00:13:33 crc kubenswrapper[5107]: I0220 00:13:33.882785 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c"} Feb 20 00:13:34 crc kubenswrapper[5107]: I0220 00:13:34.707381 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:13:34 crc kubenswrapper[5107]: I0220 00:13:34.708974 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:13:48 crc kubenswrapper[5107]: I0220 00:13:48.252380 5107 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.973627 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.974506 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rf4ps" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="registry-server" containerID="cri-o://e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" gracePeriod=30 Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.982033 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.982327 5107 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2bt6" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="registry-server" containerID="cri-o://7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" gracePeriod=30 Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.990767 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:13:58 crc kubenswrapper[5107]: I0220 00:13:58.990984 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" containerID="cri-o://5ca2faac67bd9a95ba1787bff583b54185d856ea9b2065afffa7e47b185f6d5c" gracePeriod=30 Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.003619 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.003891 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rc7nq" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="registry-server" containerID="cri-o://9a025670c3da79f3dc4e5ace412780ba57fb4632b43794f709f770b864f66566" gracePeriod=30 Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.009874 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.010301 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4kmt8" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="registry-server" containerID="cri-o://85327b3fa52a2a7a4dfca8d471e54ecf75f9fdaee09ca81161b0f52b041c2c05" gracePeriod=30 Feb 20 00:13:59 crc 
kubenswrapper[5107]: I0220 00:13:59.020719 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hngwn"] Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.124667 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hngwn"] Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.124825 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.158228 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.158391 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70393761-4fbb-4ab6-81c2-f0542f67f775-tmp\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.158538 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f49z\" (UniqueName: \"kubernetes.io/projected/70393761-4fbb-4ab6-81c2-f0542f67f775-kube-api-access-2f49z\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.158592 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.260044 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70393761-4fbb-4ab6-81c2-f0542f67f775-tmp\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.260119 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2f49z\" (UniqueName: \"kubernetes.io/projected/70393761-4fbb-4ab6-81c2-f0542f67f775-kube-api-access-2f49z\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.260160 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.260183 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: 
\"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.260860 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70393761-4fbb-4ab6-81c2-f0542f67f775-tmp\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.261348 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.266497 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/70393761-4fbb-4ab6-81c2-f0542f67f775-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.274264 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f49z\" (UniqueName: \"kubernetes.io/projected/70393761-4fbb-4ab6-81c2-f0542f67f775-kube-api-access-2f49z\") pod \"marketplace-operator-547dbd544d-hngwn\" (UID: \"70393761-4fbb-4ab6-81c2-f0542f67f775\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.477795 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.684971 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 is running failed: container process not found" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.687632 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 is running failed: container process not found" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.687808 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32 is running failed: container process not found" containerID="7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.688053 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 is running failed: container process not found" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.688082 5107 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rf4ps" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="registry-server" probeResult="unknown" Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.688384 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32 is running failed: container process not found" containerID="7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.688649 5107 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32 is running failed: container process not found" containerID="7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" cmd=["grpc_health_probe","-addr=:50051"] Feb 20 00:13:59 crc kubenswrapper[5107]: E0220 00:13:59.688672 5107 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-w2bt6" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="registry-server" probeResult="unknown" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.948467 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.972594 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content\") pod \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.972634 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptbf\" (UniqueName: \"kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf\") pod \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.972768 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities\") pod \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\" (UID: \"e148c20e-1d85-4049-b800-a0f1a42fd1ed\") " Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.973909 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities" (OuterVolumeSpecName: "utilities") pod "e148c20e-1d85-4049-b800-a0f1a42fd1ed" (UID: "e148c20e-1d85-4049-b800-a0f1a42fd1ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.984488 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf" (OuterVolumeSpecName: "kube-api-access-rptbf") pod "e148c20e-1d85-4049-b800-a0f1a42fd1ed" (UID: "e148c20e-1d85-4049-b800-a0f1a42fd1ed"). InnerVolumeSpecName "kube-api-access-rptbf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:13:59 crc kubenswrapper[5107]: I0220 00:13:59.992204 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-hngwn"] Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.009500 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.021760 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e148c20e-1d85-4049-b800-a0f1a42fd1ed" (UID: "e148c20e-1d85-4049-b800-a0f1a42fd1ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.074093 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.074124 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rptbf\" (UniqueName: \"kubernetes.io/projected/e148c20e-1d85-4049-b800-a0f1a42fd1ed-kube-api-access-rptbf\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.074136 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e148c20e-1d85-4049-b800-a0f1a42fd1ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.096218 5107 generic.go:358] "Generic (PLEG): container finished" podID="9833f13d-3814-43ad-afef-381d884e5950" containerID="5ca2faac67bd9a95ba1787bff583b54185d856ea9b2065afffa7e47b185f6d5c" exitCode=0 Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.096351 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerDied","Data":"5ca2faac67bd9a95ba1787bff583b54185d856ea9b2065afffa7e47b185f6d5c"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.096420 5107 scope.go:117] "RemoveContainer" containerID="3e90a833ccabef84d6f8a414df95fab479752a61eaa0f099c7b823171c0aa9d8" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.098535 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" event={"ID":"70393761-4fbb-4ab6-81c2-f0542f67f775","Type":"ContainerStarted","Data":"6dec05baefe7f841f0a482710e5bf56152f6f14ca665a52a0ee6db1c6d7c2ea8"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.101070 5107 generic.go:358] "Generic (PLEG): container finished" podID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" exitCode=0 Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.101183 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerDied","Data":"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.101215 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rf4ps" event={"ID":"e148c20e-1d85-4049-b800-a0f1a42fd1ed","Type":"ContainerDied","Data":"e1bfdb5e50566f257b7be4abbb352ab03391524d3c83f35a2bc1e0e3e539c428"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.101312 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rf4ps" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.113042 5107 generic.go:358] "Generic (PLEG): container finished" podID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerID="85327b3fa52a2a7a4dfca8d471e54ecf75f9fdaee09ca81161b0f52b041c2c05" exitCode=0 Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.113201 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerDied","Data":"85327b3fa52a2a7a4dfca8d471e54ecf75f9fdaee09ca81161b0f52b041c2c05"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.116299 5107 generic.go:358] "Generic (PLEG): container finished" podID="582a976b-611f-4153-9c08-eb9f343b290f" containerID="9a025670c3da79f3dc4e5ace412780ba57fb4632b43794f709f770b864f66566" exitCode=0 Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.116388 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerDied","Data":"9a025670c3da79f3dc4e5ace412780ba57fb4632b43794f709f770b864f66566"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.118351 5107 generic.go:358] "Generic (PLEG): container finished" podID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerID="7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" exitCode=0 Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.118453 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerDied","Data":"7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32"} Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.187693 5107 scope.go:117] "RemoveContainer" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" Feb 20 
00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.198890 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.212905 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.228441 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rf4ps"] Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.231294 5107 scope.go:117] "RemoveContainer" containerID="a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.232319 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.236477 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.248297 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.256282 5107 scope.go:117] "RemoveContainer" containerID="7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.277978 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp\") pod \"9833f13d-3814-43ad-afef-381d884e5950\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278016 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca\") pod \"9833f13d-3814-43ad-afef-381d884e5950\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278043 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics\") pod \"9833f13d-3814-43ad-afef-381d884e5950\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278071 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content\") pod \"582a976b-611f-4153-9c08-eb9f343b290f\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278093 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities\") pod 
\"873048c2-5622-40a5-be53-dbdbca3b95a7\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278125 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content\") pod \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278190 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities\") pod \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278231 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnqd\" (UniqueName: \"kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd\") pod \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\" (UID: \"9613fac6-e4cf-4553-b8a7-7b52986c7e27\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szh8x\" (UniqueName: \"kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x\") pod \"582a976b-611f-4153-9c08-eb9f343b290f\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278317 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content\") pod \"873048c2-5622-40a5-be53-dbdbca3b95a7\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278357 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities\") pod \"582a976b-611f-4153-9c08-eb9f343b290f\" (UID: \"582a976b-611f-4153-9c08-eb9f343b290f\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278430 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcsk\" (UniqueName: \"kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk\") pod \"9833f13d-3814-43ad-afef-381d884e5950\" (UID: \"9833f13d-3814-43ad-afef-381d884e5950\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.278447 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5lqf\" (UniqueName: \"kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf\") pod \"873048c2-5622-40a5-be53-dbdbca3b95a7\" (UID: \"873048c2-5622-40a5-be53-dbdbca3b95a7\") " Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.280324 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp" (OuterVolumeSpecName: "tmp") pod "9833f13d-3814-43ad-afef-381d884e5950" (UID: "9833f13d-3814-43ad-afef-381d884e5950"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.280815 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities" (OuterVolumeSpecName: "utilities") pod "9613fac6-e4cf-4553-b8a7-7b52986c7e27" (UID: "9613fac6-e4cf-4553-b8a7-7b52986c7e27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.280997 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9833f13d-3814-43ad-afef-381d884e5950" (UID: "9833f13d-3814-43ad-afef-381d884e5950"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.281109 5107 scope.go:117] "RemoveContainer" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.282452 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities" (OuterVolumeSpecName: "utilities") pod "873048c2-5622-40a5-be53-dbdbca3b95a7" (UID: "873048c2-5622-40a5-be53-dbdbca3b95a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: E0220 00:14:00.282552 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186\": container with ID starting with e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 not found: ID does not exist" containerID="e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.282585 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186"} err="failed to get container status \"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186\": rpc error: code = NotFound desc = could not find container \"e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186\": container with ID starting with e24200a37f5d612ce08ac464ef8d0b4f3ca393ad814ec855360bd86585173186 not found: ID does not exist" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.282608 5107 scope.go:117] "RemoveContainer" containerID="a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d" Feb 20 00:14:00 crc kubenswrapper[5107]: E0220 00:14:00.284430 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d\": container with ID starting with a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d not found: ID does not exist" containerID="a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.284466 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d"} 
err="failed to get container status \"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d\": rpc error: code = NotFound desc = could not find container \"a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d\": container with ID starting with a857daa07c1f6658ceabbe65eb5c58ded337dbd1371e3f9687a5cd7820102a9d not found: ID does not exist" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.284482 5107 scope.go:117] "RemoveContainer" containerID="7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.284342 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9833f13d-3814-43ad-afef-381d884e5950" (UID: "9833f13d-3814-43ad-afef-381d884e5950"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: E0220 00:14:00.284997 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b\": container with ID starting with 7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b not found: ID does not exist" containerID="7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.285056 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b"} err="failed to get container status \"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b\": rpc error: code = NotFound desc = could not find container \"7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b\": container with ID starting with 
7dce2d6b3c5ee386acca7820eba4bbfd624b6ac626264eca51f4d4a64a5d248b not found: ID does not exist" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.285321 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities" (OuterVolumeSpecName: "utilities") pod "582a976b-611f-4153-9c08-eb9f343b290f" (UID: "582a976b-611f-4153-9c08-eb9f343b290f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.286847 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd" (OuterVolumeSpecName: "kube-api-access-hpnqd") pod "9613fac6-e4cf-4553-b8a7-7b52986c7e27" (UID: "9613fac6-e4cf-4553-b8a7-7b52986c7e27"). InnerVolumeSpecName "kube-api-access-hpnqd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.287259 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf" (OuterVolumeSpecName: "kube-api-access-t5lqf") pod "873048c2-5622-40a5-be53-dbdbca3b95a7" (UID: "873048c2-5622-40a5-be53-dbdbca3b95a7"). InnerVolumeSpecName "kube-api-access-t5lqf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.287744 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x" (OuterVolumeSpecName: "kube-api-access-szh8x") pod "582a976b-611f-4153-9c08-eb9f343b290f" (UID: "582a976b-611f-4153-9c08-eb9f343b290f"). InnerVolumeSpecName "kube-api-access-szh8x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.288075 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk" (OuterVolumeSpecName: "kube-api-access-frcsk") pod "9833f13d-3814-43ad-afef-381d884e5950" (UID: "9833f13d-3814-43ad-afef-381d884e5950"). InnerVolumeSpecName "kube-api-access-frcsk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.311051 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "582a976b-611f-4153-9c08-eb9f343b290f" (UID: "582a976b-611f-4153-9c08-eb9f343b290f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380134 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frcsk\" (UniqueName: \"kubernetes.io/projected/9833f13d-3814-43ad-afef-381d884e5950-kube-api-access-frcsk\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380171 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5lqf\" (UniqueName: \"kubernetes.io/projected/873048c2-5622-40a5-be53-dbdbca3b95a7-kube-api-access-t5lqf\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380180 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9833f13d-3814-43ad-afef-381d884e5950-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380190 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9833f13d-3814-43ad-afef-381d884e5950-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380198 5107 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9833f13d-3814-43ad-afef-381d884e5950-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380208 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380215 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380223 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380231 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpnqd\" (UniqueName: \"kubernetes.io/projected/9613fac6-e4cf-4553-b8a7-7b52986c7e27-kube-api-access-hpnqd\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380240 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szh8x\" (UniqueName: \"kubernetes.io/projected/582a976b-611f-4153-9c08-eb9f343b290f-kube-api-access-szh8x\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.380247 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/582a976b-611f-4153-9c08-eb9f343b290f-utilities\") on node 
\"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.406913 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "873048c2-5622-40a5-be53-dbdbca3b95a7" (UID: "873048c2-5622-40a5-be53-dbdbca3b95a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.439518 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9613fac6-e4cf-4553-b8a7-7b52986c7e27" (UID: "9613fac6-e4cf-4553-b8a7-7b52986c7e27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.481592 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9613fac6-e4cf-4553-b8a7-7b52986c7e27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.481830 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873048c2-5622-40a5-be53-dbdbca3b95a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:00 crc kubenswrapper[5107]: I0220 00:14:00.492214 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" path="/var/lib/kubelet/pods/e148c20e-1d85-4049-b800-a0f1a42fd1ed/volumes" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.072686 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8d86r"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073583 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073597 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073608 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073614 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073622 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073627 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073644 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073649 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073654 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073685 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 
00:14:01.073693 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073698 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073704 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073709 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073716 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073721 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073730 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073735 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073741 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073746 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="582a976b-611f-4153-9c08-eb9f343b290f" 
containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073760 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073765 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073772 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073777 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073785 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073790 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="extract-utilities" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073797 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073802 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="extract-content" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073879 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073892 5107 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073899 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9833f13d-3814-43ad-afef-381d884e5950" containerName="marketplace-operator" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073907 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073916 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="582a976b-611f-4153-9c08-eb9f343b290f" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.073923 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e148c20e-1d85-4049-b800-a0f1a42fd1ed" containerName="registry-server" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.101003 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.103900 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.107083 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d86r"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.128206 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" event={"ID":"9833f13d-3814-43ad-afef-381d884e5950","Type":"ContainerDied","Data":"6b67bc5e444b39a355618aefdb1560c73a260c1c7fddf2f229dec6d12d4c39c2"} Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.128265 5107 scope.go:117] "RemoveContainer" containerID="5ca2faac67bd9a95ba1787bff583b54185d856ea9b2065afffa7e47b185f6d5c" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.128474 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-cvxll" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.137784 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" event={"ID":"70393761-4fbb-4ab6-81c2-f0542f67f775","Type":"ContainerStarted","Data":"5e7c35264d46002aecc10640023320afd31fa6c5b6b70aaa620bdd35ff63dee8"} Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.137995 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.145009 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.153026 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4kmt8" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.153013 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4kmt8" event={"ID":"9613fac6-e4cf-4553-b8a7-7b52986c7e27","Type":"ContainerDied","Data":"92e56d2e76573fe7f7a622e58c493ba44db4f796bad0931d1c522a62f6606ccc"} Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.155684 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.156705 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7nq" event={"ID":"582a976b-611f-4153-9c08-eb9f343b290f","Type":"ContainerDied","Data":"9b495beaad35c6babc007a8524700bd99e84636c47d00db521375b29a5041190"} Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.156826 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7nq" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.159423 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bt6" event={"ID":"873048c2-5622-40a5-be53-dbdbca3b95a7","Type":"ContainerDied","Data":"f2d065b47afe136e99d366fc34909bd8e284255d1eb57f097e0b816b93bdc190"} Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.159615 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bt6" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.161480 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-cvxll"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.164252 5107 scope.go:117] "RemoveContainer" containerID="85327b3fa52a2a7a4dfca8d471e54ecf75f9fdaee09ca81161b0f52b041c2c05" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.173040 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-hngwn" podStartSLOduration=2.173022727 podStartE2EDuration="2.173022727s" podCreationTimestamp="2026-02-20 00:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:14:01.169505987 +0000 UTC m=+327.538163553" watchObservedRunningTime="2026-02-20 00:14:01.173022727 +0000 UTC m=+327.541680293" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.185559 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.190940 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2bt6"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.195043 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-utilities\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.195098 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-catalog-content\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.195166 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kpc\" (UniqueName: \"kubernetes.io/projected/c874fdaa-8bc3-4b35-b322-420aca76db11-kube-api-access-h5kpc\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.199926 5107 scope.go:117] "RemoveContainer" containerID="5ff035e9fb87dfbc120765af007d29f5902cf09db582fde758355626d8d3fcb4" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.206468 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.224158 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4kmt8"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.226283 5107 scope.go:117] "RemoveContainer" containerID="c6ff514936c52b8d3bba5332fc980553f3c00d0248bdd1d0d1829b07e0b52be6" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.229620 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.234043 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7nq"] Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.238399 5107 scope.go:117] "RemoveContainer" containerID="9a025670c3da79f3dc4e5ace412780ba57fb4632b43794f709f770b864f66566" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.252333 5107 scope.go:117] "RemoveContainer" containerID="9e4f009a603aa27099856ec95dbc0e4a5c64e68c9f2a5c2059cb7f3563cdf21d" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.265685 5107 scope.go:117] "RemoveContainer" containerID="b0b3c3dbdfe7075d03727e8600a0ce01f99e3c28635d361cdce385b77ebd2553" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.280257 5107 scope.go:117] "RemoveContainer" containerID="7d5ad420e49354e2a1850815dd80fe500286af1d220dd8cfec505ec3676baa32" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.296821 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-utilities\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.296859 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-catalog-content\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.296899 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kpc\" (UniqueName: \"kubernetes.io/projected/c874fdaa-8bc3-4b35-b322-420aca76db11-kube-api-access-h5kpc\") 
pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.297349 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-catalog-content\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.297420 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c874fdaa-8bc3-4b35-b322-420aca76db11-utilities\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.299206 5107 scope.go:117] "RemoveContainer" containerID="95857bd6374c565c234da2e7ce3eb9c45491ca331ca04a200cfac36f782b14df" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.316853 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kpc\" (UniqueName: \"kubernetes.io/projected/c874fdaa-8bc3-4b35-b322-420aca76db11-kube-api-access-h5kpc\") pod \"certified-operators-8d86r\" (UID: \"c874fdaa-8bc3-4b35-b322-420aca76db11\") " pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.325006 5107 scope.go:117] "RemoveContainer" containerID="63ae451af7e325f883dfbd812995a085d5541c899c93f6daf6382248d049123c" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.421120 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d86r" Feb 20 00:14:01 crc kubenswrapper[5107]: I0220 00:14:01.656803 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d86r"] Feb 20 00:14:01 crc kubenswrapper[5107]: W0220 00:14:01.661036 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc874fdaa_8bc3_4b35_b322_420aca76db11.slice/crio-c49ab56c148430f3a5d8c8ddd1b2b638ce2d81ba658ccd6fc4954479bc2c8595 WatchSource:0}: Error finding container c49ab56c148430f3a5d8c8ddd1b2b638ce2d81ba658ccd6fc4954479bc2c8595: Status 404 returned error can't find the container with id c49ab56c148430f3a5d8c8ddd1b2b638ce2d81ba658ccd6fc4954479bc2c8595 Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.169508 5107 generic.go:358] "Generic (PLEG): container finished" podID="c874fdaa-8bc3-4b35-b322-420aca76db11" containerID="c5ca3003feb6500118002f94c568b9384b3add552e82f84c5842a4ab3486c829" exitCode=0 Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.169566 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d86r" event={"ID":"c874fdaa-8bc3-4b35-b322-420aca76db11","Type":"ContainerDied","Data":"c5ca3003feb6500118002f94c568b9384b3add552e82f84c5842a4ab3486c829"} Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.169952 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d86r" event={"ID":"c874fdaa-8bc3-4b35-b322-420aca76db11","Type":"ContainerStarted","Data":"c49ab56c148430f3a5d8c8ddd1b2b638ce2d81ba658ccd6fc4954479bc2c8595"} Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.507493 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582a976b-611f-4153-9c08-eb9f343b290f" path="/var/lib/kubelet/pods/582a976b-611f-4153-9c08-eb9f343b290f/volumes" Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 
00:14:02.508877 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873048c2-5622-40a5-be53-dbdbca3b95a7" path="/var/lib/kubelet/pods/873048c2-5622-40a5-be53-dbdbca3b95a7/volumes" Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.510297 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9613fac6-e4cf-4553-b8a7-7b52986c7e27" path="/var/lib/kubelet/pods/9613fac6-e4cf-4553-b8a7-7b52986c7e27/volumes" Feb 20 00:14:02 crc kubenswrapper[5107]: I0220 00:14:02.511457 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9833f13d-3814-43ad-afef-381d884e5950" path="/var/lib/kubelet/pods/9833f13d-3814-43ad-afef-381d884e5950/volumes" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.180468 5107 generic.go:358] "Generic (PLEG): container finished" podID="c874fdaa-8bc3-4b35-b322-420aca76db11" containerID="8fa3febb757a1ab52b341990c65c5b326477d7044d149e39e9a6ffbf6092194d" exitCode=0 Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.180519 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d86r" event={"ID":"c874fdaa-8bc3-4b35-b322-420aca76db11","Type":"ContainerDied","Data":"8fa3febb757a1ab52b341990c65c5b326477d7044d149e39e9a6ffbf6092194d"} Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.270798 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.292257 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.292380 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.294916 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.320415 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.320910 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.321211 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kpt\" (UniqueName: \"kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.423670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.423764 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82kpt\" (UniqueName: \"kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.423810 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.424227 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.424282 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.457512 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kpt\" (UniqueName: \"kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt\") pod \"redhat-marketplace-5shbf\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.473344 5107 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-xbhv6"] Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.480555 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.482850 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.491177 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbhv6"] Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.524575 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-catalog-content\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.524638 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-utilities\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.524658 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9fb\" (UniqueName: \"kubernetes.io/projected/79e18e26-a4b7-4c39-a891-132a1e36d2d2-kube-api-access-gp9fb\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.608423 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.625645 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-utilities\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.625677 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp9fb\" (UniqueName: \"kubernetes.io/projected/79e18e26-a4b7-4c39-a891-132a1e36d2d2-kube-api-access-gp9fb\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.625737 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-catalog-content\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.626121 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-catalog-content\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.626341 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e18e26-a4b7-4c39-a891-132a1e36d2d2-utilities\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " 
pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.648280 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp9fb\" (UniqueName: \"kubernetes.io/projected/79e18e26-a4b7-4c39-a891-132a1e36d2d2-kube-api-access-gp9fb\") pod \"redhat-operators-xbhv6\" (UID: \"79e18e26-a4b7-4c39-a891-132a1e36d2d2\") " pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.808569 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhv6" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.815811 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:14:03 crc kubenswrapper[5107]: W0220 00:14:03.825416 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10fea87c_1407_4938_96ac_727dfe224f63.slice/crio-02034380e194cb6cf30d53b5a53d2c6781e992f74ec46f5dd7d66fad9813f276 WatchSource:0}: Error finding container 02034380e194cb6cf30d53b5a53d2c6781e992f74ec46f5dd7d66fad9813f276: Status 404 returned error can't find the container with id 02034380e194cb6cf30d53b5a53d2c6781e992f74ec46f5dd7d66fad9813f276 Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.977396 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-tc95p"] Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.982287 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:03 crc kubenswrapper[5107]: I0220 00:14:03.988578 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-tc95p"] Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.019887 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbhv6"] Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030314 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030372 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht76d\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-kube-api-access-ht76d\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030396 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-bound-sa-token\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030417 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-trusted-ca\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030444 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030479 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-tls\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030500 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.030534 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-certificates\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 
00:14:04 crc kubenswrapper[5107]: W0220 00:14:04.040057 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e18e26_a4b7_4c39_a891_132a1e36d2d2.slice/crio-19761663cdcf1a12ad61a42bcd311b470025bb87b8ac285bee0828091a605f50 WatchSource:0}: Error finding container 19761663cdcf1a12ad61a42bcd311b470025bb87b8ac285bee0828091a605f50: Status 404 returned error can't find the container with id 19761663cdcf1a12ad61a42bcd311b470025bb87b8ac285bee0828091a605f50 Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.067278 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131238 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ht76d\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-kube-api-access-ht76d\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131377 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-bound-sa-token\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131462 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-trusted-ca\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131568 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-tls\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131641 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131735 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-certificates\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.131802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.132753 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-trusted-ca\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.132930 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.134044 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-certificates\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.137069 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.137220 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-registry-tls\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc 
kubenswrapper[5107]: I0220 00:14:04.147752 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-bound-sa-token\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.152848 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht76d\" (UniqueName: \"kubernetes.io/projected/f8e85b4b-f4ce-4d5c-8197-d964da6365fa-kube-api-access-ht76d\") pod \"image-registry-5d9d95bf5b-tc95p\" (UID: \"f8e85b4b-f4ce-4d5c-8197-d964da6365fa\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.187882 5107 generic.go:358] "Generic (PLEG): container finished" podID="10fea87c-1407-4938-96ac-727dfe224f63" containerID="e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b" exitCode=0 Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.187986 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerDied","Data":"e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b"} Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.188012 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerStarted","Data":"02034380e194cb6cf30d53b5a53d2c6781e992f74ec46f5dd7d66fad9813f276"} Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.195989 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d86r" 
event={"ID":"c874fdaa-8bc3-4b35-b322-420aca76db11","Type":"ContainerStarted","Data":"76af5933f90a1b8543f039899b1196f2376722eb6755ef6fa7c4b5a718bfed31"} Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.197954 5107 generic.go:358] "Generic (PLEG): container finished" podID="79e18e26-a4b7-4c39-a891-132a1e36d2d2" containerID="89c64330a3b395fb50f7e6ba9267407dd008294bc323a0d03e35653c79773241" exitCode=0 Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.197996 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhv6" event={"ID":"79e18e26-a4b7-4c39-a891-132a1e36d2d2","Type":"ContainerDied","Data":"89c64330a3b395fb50f7e6ba9267407dd008294bc323a0d03e35653c79773241"} Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.198213 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhv6" event={"ID":"79e18e26-a4b7-4c39-a891-132a1e36d2d2","Type":"ContainerStarted","Data":"19761663cdcf1a12ad61a42bcd311b470025bb87b8ac285bee0828091a605f50"} Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.248234 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8d86r" podStartSLOduration=2.6061170750000002 podStartE2EDuration="3.248213666s" podCreationTimestamp="2026-02-20 00:14:01 +0000 UTC" firstStartedPulling="2026-02-20 00:14:02.171775994 +0000 UTC m=+328.540433590" lastFinishedPulling="2026-02-20 00:14:02.813872605 +0000 UTC m=+329.182530181" observedRunningTime="2026-02-20 00:14:04.244911872 +0000 UTC m=+330.613569428" watchObservedRunningTime="2026-02-20 00:14:04.248213666 +0000 UTC m=+330.616871242" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.350484 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:04 crc kubenswrapper[5107]: I0220 00:14:04.573425 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-tc95p"] Feb 20 00:14:04 crc kubenswrapper[5107]: W0220 00:14:04.578432 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8e85b4b_f4ce_4d5c_8197_d964da6365fa.slice/crio-de0c7619de5bf03c432ae100dd52d0ba8f422ec9884d342685d4aa215e6801dd WatchSource:0}: Error finding container de0c7619de5bf03c432ae100dd52d0ba8f422ec9884d342685d4aa215e6801dd: Status 404 returned error can't find the container with id de0c7619de5bf03c432ae100dd52d0ba8f422ec9884d342685d4aa215e6801dd Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.204313 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhv6" event={"ID":"79e18e26-a4b7-4c39-a891-132a1e36d2d2","Type":"ContainerStarted","Data":"0683bf632f88b67ae257c460536f67a904c41fc5190d6fc694fdcdc627eb3ef8"} Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.206117 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" event={"ID":"f8e85b4b-f4ce-4d5c-8197-d964da6365fa","Type":"ContainerStarted","Data":"f33e084e1a484529d28686756075afdaddce5fa95aaf4af0e927dcacc4aa4a94"} Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.206183 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" event={"ID":"f8e85b4b-f4ce-4d5c-8197-d964da6365fa","Type":"ContainerStarted","Data":"de0c7619de5bf03c432ae100dd52d0ba8f422ec9884d342685d4aa215e6801dd"} Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.206639 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" 
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.209238 5107 generic.go:358] "Generic (PLEG): container finished" podID="10fea87c-1407-4938-96ac-727dfe224f63" containerID="06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1" exitCode=0
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.210260 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerDied","Data":"06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1"}
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.664075 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" podStartSLOduration=2.664059941 podStartE2EDuration="2.664059941s" podCreationTimestamp="2026-02-20 00:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:14:05.261424552 +0000 UTC m=+331.630082118" watchObservedRunningTime="2026-02-20 00:14:05.664059941 +0000 UTC m=+332.032717507"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.665294 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtklb"]
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.673616 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.675088 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtklb"]
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.676108 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.771107 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-catalog-content\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.771239 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-utilities\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.771293 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqpj\" (UniqueName: \"kubernetes.io/projected/a99f89f1-7b96-4a0d-9de6-5930f350e330-kube-api-access-dvqpj\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.873087 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-utilities\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.873200 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dvqpj\" (UniqueName: \"kubernetes.io/projected/a99f89f1-7b96-4a0d-9de6-5930f350e330-kube-api-access-dvqpj\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.873258 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-catalog-content\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.873616 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-utilities\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.873744 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f89f1-7b96-4a0d-9de6-5930f350e330-catalog-content\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.910615 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvqpj\" (UniqueName: \"kubernetes.io/projected/a99f89f1-7b96-4a0d-9de6-5930f350e330-kube-api-access-dvqpj\") pod \"community-operators-rtklb\" (UID: \"a99f89f1-7b96-4a0d-9de6-5930f350e330\") " pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:05 crc kubenswrapper[5107]: I0220 00:14:05.986435 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.162431 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtklb"]
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.219501 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtklb" event={"ID":"a99f89f1-7b96-4a0d-9de6-5930f350e330","Type":"ContainerStarted","Data":"99a0e1453c8f03acb98ef3c4eaf69bb0a73f6ca5d2a88d87bbc95a18db3a655a"}
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.223091 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerStarted","Data":"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad"}
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.224908 5107 generic.go:358] "Generic (PLEG): container finished" podID="79e18e26-a4b7-4c39-a891-132a1e36d2d2" containerID="0683bf632f88b67ae257c460536f67a904c41fc5190d6fc694fdcdc627eb3ef8" exitCode=0
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.224930 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhv6" event={"ID":"79e18e26-a4b7-4c39-a891-132a1e36d2d2","Type":"ContainerDied","Data":"0683bf632f88b67ae257c460536f67a904c41fc5190d6fc694fdcdc627eb3ef8"}
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.392357 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5shbf" podStartSLOduration=2.841874026 podStartE2EDuration="3.392338197s" podCreationTimestamp="2026-02-20 00:14:03 +0000 UTC" firstStartedPulling="2026-02-20 00:14:04.188804211 +0000 UTC m=+330.557461777" lastFinishedPulling="2026-02-20 00:14:04.739268382 +0000 UTC m=+331.107925948" observedRunningTime="2026-02-20 00:14:06.262309889 +0000 UTC m=+332.630967455" watchObservedRunningTime="2026-02-20 00:14:06.392338197 +0000 UTC m=+332.760995763"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.392766 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"]
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.393033 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerName="controller-manager" containerID="cri-o://0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec" gracePeriod=30
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.758426 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.792304 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"]
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.793374 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerName="controller-manager"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.793399 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerName="controller-manager"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.793526 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerName="controller-manager"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.799717 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.815966 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"]
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.888572 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.888665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.888731 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2lq\" (UniqueName: \"kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.888873 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.888943 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889014 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp\") pod \"d7968202-8763-48c2-88c6-b58c629a7e4b\" (UID: \"d7968202-8763-48c2-88c6-b58c629a7e4b\") "
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889136 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8tq\" (UniqueName: \"kubernetes.io/projected/3d480849-8d8c-465e-b2f9-1b2d37c98e52-kube-api-access-5b8tq\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889254 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d480849-8d8c-465e-b2f9-1b2d37c98e52-tmp\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889301 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-config\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889387 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-client-ca\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889588 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp" (OuterVolumeSpecName: "tmp") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889601 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d480849-8d8c-465e-b2f9-1b2d37c98e52-serving-cert\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889749 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7968202-8763-48c2-88c6-b58c629a7e4b-tmp\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889795 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.889859 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config" (OuterVolumeSpecName: "config") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.890385 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.895815 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.895828 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq" (OuterVolumeSpecName: "kube-api-access-cm2lq") pod "d7968202-8763-48c2-88c6-b58c629a7e4b" (UID: "d7968202-8763-48c2-88c6-b58c629a7e4b"). InnerVolumeSpecName "kube-api-access-cm2lq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991012 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-config\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991051 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-client-ca\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991101 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d480849-8d8c-465e-b2f9-1b2d37c98e52-serving-cert\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991256 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991302 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8tq\" (UniqueName: \"kubernetes.io/projected/3d480849-8d8c-465e-b2f9-1b2d37c98e52-kube-api-access-5b8tq\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991455 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d480849-8d8c-465e-b2f9-1b2d37c98e52-tmp\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991569 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cm2lq\" (UniqueName: \"kubernetes.io/projected/d7968202-8763-48c2-88c6-b58c629a7e4b-kube-api-access-cm2lq\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991585 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991594 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991604 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7968202-8763-48c2-88c6-b58c629a7e4b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991613 5107 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7968202-8763-48c2-88c6-b58c629a7e4b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.991998 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d480849-8d8c-465e-b2f9-1b2d37c98e52-tmp\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.992374 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-proxy-ca-bundles\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.992504 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-config\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:06 crc kubenswrapper[5107]: I0220 00:14:06.994416 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d480849-8d8c-465e-b2f9-1b2d37c98e52-client-ca\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.002080 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d480849-8d8c-465e-b2f9-1b2d37c98e52-serving-cert\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.006452 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8tq\" (UniqueName: \"kubernetes.io/projected/3d480849-8d8c-465e-b2f9-1b2d37c98e52-kube-api-access-5b8tq\") pod \"controller-manager-54cf5d9fd7-msqwb\" (UID: \"3d480849-8d8c-465e-b2f9-1b2d37c98e52\") " pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.124568 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.231503 5107 generic.go:358] "Generic (PLEG): container finished" podID="a99f89f1-7b96-4a0d-9de6-5930f350e330" containerID="21b3e9317513bd1691ded5d94782e3500c6fa3f116e2c9a06632fd3bba3c2df0" exitCode=0
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.231579 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtklb" event={"ID":"a99f89f1-7b96-4a0d-9de6-5930f350e330","Type":"ContainerDied","Data":"21b3e9317513bd1691ded5d94782e3500c6fa3f116e2c9a06632fd3bba3c2df0"}
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.236582 5107 generic.go:358] "Generic (PLEG): container finished" podID="d7968202-8763-48c2-88c6-b58c629a7e4b" containerID="0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec" exitCode=0
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.236720 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" event={"ID":"d7968202-8763-48c2-88c6-b58c629a7e4b","Type":"ContainerDied","Data":"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"}
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.236761 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx" event={"ID":"d7968202-8763-48c2-88c6-b58c629a7e4b","Type":"ContainerDied","Data":"9d77d179208cd9001e763f3225511ce4dfb882c434cf6929e6c3b6c66a21e1d4"}
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.236784 5107 scope.go:117] "RemoveContainer" containerID="0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.236686 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.259983 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhv6" event={"ID":"79e18e26-a4b7-4c39-a891-132a1e36d2d2","Type":"ContainerStarted","Data":"973d3156915882e94e128e6fa11560a43b55ad0d875c50d63cc35b4d3bc09a0a"}
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.286991 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"]
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.298423 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f94b4dd44-pxgvx"]
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.313537 5107 scope.go:117] "RemoveContainer" containerID="0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"
Feb 20 00:14:07 crc kubenswrapper[5107]: E0220 00:14:07.314096 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec\": container with ID starting with 0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec not found: ID does not exist" containerID="0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.314175 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec"} err="failed to get container status \"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec\": rpc error: code = NotFound desc = could not find container \"0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec\": container with ID starting with 0f62f08361e4706c46d6caea17d85d06b4eeb2d6dcf45d2c445586e50e5cd2ec not found: ID does not exist"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.316533 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbhv6" podStartSLOduration=3.67909356 podStartE2EDuration="4.316515188s" podCreationTimestamp="2026-02-20 00:14:03 +0000 UTC" firstStartedPulling="2026-02-20 00:14:04.198833675 +0000 UTC m=+330.567491241" lastFinishedPulling="2026-02-20 00:14:04.836255303 +0000 UTC m=+331.204912869" observedRunningTime="2026-02-20 00:14:07.313864673 +0000 UTC m=+333.682522239" watchObservedRunningTime="2026-02-20 00:14:07.316515188 +0000 UTC m=+333.685172754"
Feb 20 00:14:07 crc kubenswrapper[5107]: I0220 00:14:07.389094 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"]
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.265464 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb" event={"ID":"3d480849-8d8c-465e-b2f9-1b2d37c98e52","Type":"ContainerStarted","Data":"1c9280334b4f10df12280420833603d91d62915dc2eb9f1a7085707f6e4f1234"}
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.265926 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb" event={"ID":"3d480849-8d8c-465e-b2f9-1b2d37c98e52","Type":"ContainerStarted","Data":"b84a80ed7903bed44a95caaff9a5f2676c1bd31b5fb6f9977fdef73691c0c296"}
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.265940 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.268886 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtklb" event={"ID":"a99f89f1-7b96-4a0d-9de6-5930f350e330","Type":"ContainerStarted","Data":"c82581b1b56d27f17bf10c76e3dc7290fe6c968727f07556e2bc2ec5b136678e"}
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.281941 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb" podStartSLOduration=2.281919459 podStartE2EDuration="2.281919459s" podCreationTimestamp="2026-02-20 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:14:08.280507809 +0000 UTC m=+334.649165375" watchObservedRunningTime="2026-02-20 00:14:08.281919459 +0000 UTC m=+334.650577025"
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.493120 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7968202-8763-48c2-88c6-b58c629a7e4b" path="/var/lib/kubelet/pods/d7968202-8763-48c2-88c6-b58c629a7e4b/volumes"
Feb 20 00:14:08 crc kubenswrapper[5107]: I0220 00:14:08.540038 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54cf5d9fd7-msqwb"
Feb 20 00:14:09 crc kubenswrapper[5107]: I0220 00:14:09.276439 5107 generic.go:358] "Generic (PLEG): container finished" podID="a99f89f1-7b96-4a0d-9de6-5930f350e330" containerID="c82581b1b56d27f17bf10c76e3dc7290fe6c968727f07556e2bc2ec5b136678e" exitCode=0
Feb 20 00:14:09 crc kubenswrapper[5107]: I0220 00:14:09.276500 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtklb" event={"ID":"a99f89f1-7b96-4a0d-9de6-5930f350e330","Type":"ContainerDied","Data":"c82581b1b56d27f17bf10c76e3dc7290fe6c968727f07556e2bc2ec5b136678e"}
Feb 20 00:14:10 crc kubenswrapper[5107]: I0220 00:14:10.285054 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtklb" event={"ID":"a99f89f1-7b96-4a0d-9de6-5930f350e330","Type":"ContainerStarted","Data":"31cd0cd16a5a9f840887da93d8288f445807617050c4942771a79244be25ac39"}
Feb 20 00:14:10 crc kubenswrapper[5107]: I0220 00:14:10.315831 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtklb" podStartSLOduration=4.693440042 podStartE2EDuration="5.315798843s" podCreationTimestamp="2026-02-20 00:14:05 +0000 UTC" firstStartedPulling="2026-02-20 00:14:07.232443344 +0000 UTC m=+333.601100920" lastFinishedPulling="2026-02-20 00:14:07.854802155 +0000 UTC m=+334.223459721" observedRunningTime="2026-02-20 00:14:10.308609079 +0000 UTC m=+336.677266645" watchObservedRunningTime="2026-02-20 00:14:10.315798843 +0000 UTC m=+336.684456419"
Feb 20 00:14:11 crc kubenswrapper[5107]: I0220 00:14:11.422272 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8d86r"
Feb 20 00:14:11 crc kubenswrapper[5107]: I0220 00:14:11.422722 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8d86r"
Feb 20 00:14:11 crc kubenswrapper[5107]: I0220 00:14:11.465500 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8d86r"
Feb 20 00:14:12 crc kubenswrapper[5107]: I0220 00:14:12.349357 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8d86r"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.609296 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5shbf"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.609995 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5shbf"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.668434 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5shbf"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.812098 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbhv6"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.812166 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xbhv6"
Feb 20 00:14:13 crc kubenswrapper[5107]: I0220 00:14:13.853078 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbhv6"
Feb 20 00:14:14 crc kubenswrapper[5107]: I0220 00:14:14.370231 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbhv6"
Feb 20 00:14:14 crc kubenswrapper[5107]: I0220 00:14:14.412358 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5shbf"
Feb 20 00:14:15 crc kubenswrapper[5107]: I0220 00:14:15.987238 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:15 crc kubenswrapper[5107]: I0220 00:14:15.987303 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:16 crc kubenswrapper[5107]: I0220 00:14:16.032916 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:16 crc kubenswrapper[5107]: I0220 00:14:16.364135 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtklb"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.395647 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"]
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.396389 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" podUID="53af46bb-34e6-406b-b2b2-6745d2b0263c" containerName="route-controller-manager" containerID="cri-o://d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870" gracePeriod=30
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.719514 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.765671 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc"]
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.767322 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="53af46bb-34e6-406b-b2b2-6745d2b0263c" containerName="route-controller-manager"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.767389 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="53af46bb-34e6-406b-b2b2-6745d2b0263c" containerName="route-controller-manager"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.767782 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="53af46bb-34e6-406b-b2b2-6745d2b0263c" containerName="route-controller-manager"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.772837 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc"
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.777596 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc"]
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815351 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config\") pod \"53af46bb-34e6-406b-b2b2-6745d2b0263c\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") "
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815390 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca\") pod \"53af46bb-34e6-406b-b2b2-6745d2b0263c\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") "
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815419 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp\") pod \"53af46bb-34e6-406b-b2b2-6745d2b0263c\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") "
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815449 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert\") pod \"53af46bb-34e6-406b-b2b2-6745d2b0263c\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") "
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815582 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7jsk\" (UniqueName: \"kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk\") pod \"53af46bb-34e6-406b-b2b2-6745d2b0263c\" (UID: \"53af46bb-34e6-406b-b2b2-6745d2b0263c\") "
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.815974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp" (OuterVolumeSpecName: "tmp") pod "53af46bb-34e6-406b-b2b2-6745d2b0263c" (UID: "53af46bb-34e6-406b-b2b2-6745d2b0263c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.816394 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config" (OuterVolumeSpecName: "config") pod "53af46bb-34e6-406b-b2b2-6745d2b0263c" (UID: "53af46bb-34e6-406b-b2b2-6745d2b0263c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.816412 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca" (OuterVolumeSpecName: "client-ca") pod "53af46bb-34e6-406b-b2b2-6745d2b0263c" (UID: "53af46bb-34e6-406b-b2b2-6745d2b0263c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.823304 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53af46bb-34e6-406b-b2b2-6745d2b0263c" (UID: "53af46bb-34e6-406b-b2b2-6745d2b0263c"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.823331 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk" (OuterVolumeSpecName: "kube-api-access-n7jsk") pod "53af46bb-34e6-406b-b2b2-6745d2b0263c" (UID: "53af46bb-34e6-406b-b2b2-6745d2b0263c"). InnerVolumeSpecName "kube-api-access-n7jsk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916476 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdts\" (UniqueName: \"kubernetes.io/projected/0acd81ee-5493-4083-9dd1-751d8d0b4090-kube-api-access-xtdts\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916532 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0acd81ee-5493-4083-9dd1-751d8d0b4090-tmp\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916601 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-client-ca\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916678 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0acd81ee-5493-4083-9dd1-751d8d0b4090-serving-cert\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-config\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916885 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7jsk\" (UniqueName: \"kubernetes.io/projected/53af46bb-34e6-406b-b2b2-6745d2b0263c-kube-api-access-n7jsk\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916906 5107 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916919 5107 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53af46bb-34e6-406b-b2b2-6745d2b0263c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916930 5107 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53af46bb-34e6-406b-b2b2-6745d2b0263c-tmp\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:26 crc kubenswrapper[5107]: I0220 00:14:26.916941 5107 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/53af46bb-34e6-406b-b2b2-6745d2b0263c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.026946 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0acd81ee-5493-4083-9dd1-751d8d0b4090-serving-cert\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.027044 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-config\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.027171 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdts\" (UniqueName: \"kubernetes.io/projected/0acd81ee-5493-4083-9dd1-751d8d0b4090-kube-api-access-xtdts\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.027216 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0acd81ee-5493-4083-9dd1-751d8d0b4090-tmp\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.027284 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-client-ca\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.028808 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-client-ca\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.031664 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acd81ee-5493-4083-9dd1-751d8d0b4090-config\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.038180 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0acd81ee-5493-4083-9dd1-751d8d0b4090-serving-cert\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.038393 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0acd81ee-5493-4083-9dd1-751d8d0b4090-tmp\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 
00:14:27.058528 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdts\" (UniqueName: \"kubernetes.io/projected/0acd81ee-5493-4083-9dd1-751d8d0b4090-kube-api-access-xtdts\") pod \"route-controller-manager-5d96b79b56-6lqgc\" (UID: \"0acd81ee-5493-4083-9dd1-751d8d0b4090\") " pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.090960 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.267343 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-tc95p" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.327923 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"] Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.383274 5107 generic.go:358] "Generic (PLEG): container finished" podID="53af46bb-34e6-406b-b2b2-6745d2b0263c" containerID="d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870" exitCode=0 Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.383624 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" event={"ID":"53af46bb-34e6-406b-b2b2-6745d2b0263c","Type":"ContainerDied","Data":"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870"} Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.383653 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" event={"ID":"53af46bb-34e6-406b-b2b2-6745d2b0263c","Type":"ContainerDied","Data":"71f6abb788918f621ef092d48ecfc559d9a23d592170fb43c034a6c5a03a6bd7"} Feb 20 00:14:27 crc kubenswrapper[5107]: 
I0220 00:14:27.383670 5107 scope.go:117] "RemoveContainer" containerID="d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.383818 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.385337 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc"] Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.414515 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"] Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.415982 5107 scope.go:117] "RemoveContainer" containerID="d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870" Feb 20 00:14:27 crc kubenswrapper[5107]: E0220 00:14:27.416461 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870\": container with ID starting with d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870 not found: ID does not exist" containerID="d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.416501 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870"} err="failed to get container status \"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870\": rpc error: code = NotFound desc = could not find container \"d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870\": container with ID starting with d4b532e33e715fb96ee7982c6905417d0d82c4c83d33156397252032a35c3870 not found: ID does not 
exist" Feb 20 00:14:27 crc kubenswrapper[5107]: I0220 00:14:27.422091 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84d6978b89-fcwdd"] Feb 20 00:14:28 crc kubenswrapper[5107]: I0220 00:14:28.391282 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" event={"ID":"0acd81ee-5493-4083-9dd1-751d8d0b4090","Type":"ContainerStarted","Data":"bbb0f83fbdf84ec9077bbd45ef1b681b6eef8dd80b40c707823923c96d7ab1a4"} Feb 20 00:14:28 crc kubenswrapper[5107]: I0220 00:14:28.391599 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" event={"ID":"0acd81ee-5493-4083-9dd1-751d8d0b4090","Type":"ContainerStarted","Data":"04f2a53db485c57603a26cb1f4f5a03b67e7375234210ece3934ba5af076a04e"} Feb 20 00:14:28 crc kubenswrapper[5107]: I0220 00:14:28.391620 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:28 crc kubenswrapper[5107]: I0220 00:14:28.411311 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" podStartSLOduration=2.411290922 podStartE2EDuration="2.411290922s" podCreationTimestamp="2026-02-20 00:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:14:28.410015936 +0000 UTC m=+354.778673502" watchObservedRunningTime="2026-02-20 00:14:28.411290922 +0000 UTC m=+354.779948498" Feb 20 00:14:28 crc kubenswrapper[5107]: I0220 00:14:28.442928 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d96b79b56-6lqgc" Feb 20 00:14:28 crc 
kubenswrapper[5107]: I0220 00:14:28.493295 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53af46bb-34e6-406b-b2b2-6745d2b0263c" path="/var/lib/kubelet/pods/53af46bb-34e6-406b-b2b2-6745d2b0263c/volumes" Feb 20 00:14:52 crc kubenswrapper[5107]: I0220 00:14:52.380505 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" podUID="f9644e65-d917-4c28-a428-743979d10f4e" containerName="registry" containerID="cri-o://1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce" gracePeriod=30 Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.479948 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556304 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556407 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556701 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556783 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556857 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556887 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcd6\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.556983 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.557020 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca\") pod \"f9644e65-d917-4c28-a428-743979d10f4e\" (UID: \"f9644e65-d917-4c28-a428-743979d10f4e\") " Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.560448 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.558322 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.564783 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6" (OuterVolumeSpecName: "kube-api-access-xkcd6") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "kube-api-access-xkcd6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566074 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566310 5107 generic.go:358] "Generic (PLEG): container finished" podID="f9644e65-d917-4c28-a428-743979d10f4e" containerID="1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce" exitCode=0 Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566398 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566560 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" event={"ID":"f9644e65-d917-4c28-a428-743979d10f4e","Type":"ContainerDied","Data":"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce"} Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566619 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-7txlk" event={"ID":"f9644e65-d917-4c28-a428-743979d10f4e","Type":"ContainerDied","Data":"54a9b420f99e36250b15c08fc4eda098e026f9aad9d1ef5b7977edfab5c43adb"} Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.566645 5107 scope.go:117] "RemoveContainer" containerID="1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.567367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.568211 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.579848 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.594303 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9644e65-d917-4c28-a428-743979d10f4e" (UID: "f9644e65-d917-4c28-a428-743979d10f4e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.637757 5107 scope.go:117] "RemoveContainer" containerID="1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce" Feb 20 00:14:53 crc kubenswrapper[5107]: E0220 00:14:53.638580 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce\": container with ID starting with 1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce not found: ID does not exist" containerID="1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.638626 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce"} err="failed to get container status \"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce\": rpc error: code = NotFound desc 
= could not find container \"1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce\": container with ID starting with 1182ac97b3ac52ce2cff91a2cc1d46c8e783b1d179b3e49ea783180dfd44abce not found: ID does not exist" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658472 5107 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658513 5107 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658533 5107 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9644e65-d917-4c28-a428-743979d10f4e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658546 5107 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9644e65-d917-4c28-a428-743979d10f4e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658558 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xkcd6\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-kube-api-access-xkcd6\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658568 5107 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9644e65-d917-4c28-a428-743979d10f4e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.658578 5107 reconciler_common.go:299] "Volume detached for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9644e65-d917-4c28-a428-743979d10f4e-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.915823 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"]
Feb 20 00:14:53 crc kubenswrapper[5107]: I0220 00:14:53.922892 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-7txlk"]
Feb 20 00:14:54 crc kubenswrapper[5107]: I0220 00:14:54.495713 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9644e65-d917-4c28-a428-743979d10f4e" path="/var/lib/kubelet/pods/f9644e65-d917-4c28-a428-743979d10f4e/volumes"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.203085 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"]
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.205315 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9644e65-d917-4c28-a428-743979d10f4e" containerName="registry"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.205352 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9644e65-d917-4c28-a428-743979d10f4e" containerName="registry"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.205639 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9644e65-d917-4c28-a428-743979d10f4e" containerName="registry"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.218912 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.221954 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.222013 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.226189 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"]
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.364078 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njsvc\" (UniqueName: \"kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.364342 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.364396 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.465888 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njsvc\" (UniqueName: \"kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.466033 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.466070 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.467965 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.475222 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.493037 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njsvc\" (UniqueName: \"kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc\") pod \"collect-profiles-29525775-78h5d\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.535869 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:00 crc kubenswrapper[5107]: I0220 00:15:00.783649 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"]
Feb 20 00:15:01 crc kubenswrapper[5107]: I0220 00:15:01.625389 5107 generic.go:358] "Generic (PLEG): container finished" podID="d1b7c135-ca87-474f-b765-e6af016c121a" containerID="c2568a990818af749627c63bc39d95a93a12412919bb779fadd6d3c4c6021278" exitCode=0
Feb 20 00:15:01 crc kubenswrapper[5107]: I0220 00:15:01.625485 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d" event={"ID":"d1b7c135-ca87-474f-b765-e6af016c121a","Type":"ContainerDied","Data":"c2568a990818af749627c63bc39d95a93a12412919bb779fadd6d3c4c6021278"}
Feb 20 00:15:01 crc kubenswrapper[5107]: I0220 00:15:01.625791 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d" event={"ID":"d1b7c135-ca87-474f-b765-e6af016c121a","Type":"ContainerStarted","Data":"d5d79dbcf0a2b50177968e842177a45b17d0b6130c0c57f65a3a2676932c34a5"}
Feb 20 00:15:02 crc kubenswrapper[5107]: I0220 00:15:02.958915 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.107501 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume\") pod \"d1b7c135-ca87-474f-b765-e6af016c121a\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") "
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.107566 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume\") pod \"d1b7c135-ca87-474f-b765-e6af016c121a\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") "
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.107675 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njsvc\" (UniqueName: \"kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc\") pod \"d1b7c135-ca87-474f-b765-e6af016c121a\" (UID: \"d1b7c135-ca87-474f-b765-e6af016c121a\") "
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.109048 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1b7c135-ca87-474f-b765-e6af016c121a" (UID: "d1b7c135-ca87-474f-b765-e6af016c121a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.117262 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1b7c135-ca87-474f-b765-e6af016c121a" (UID: "d1b7c135-ca87-474f-b765-e6af016c121a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.118042 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc" (OuterVolumeSpecName: "kube-api-access-njsvc") pod "d1b7c135-ca87-474f-b765-e6af016c121a" (UID: "d1b7c135-ca87-474f-b765-e6af016c121a"). InnerVolumeSpecName "kube-api-access-njsvc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.209284 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1b7c135-ca87-474f-b765-e6af016c121a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.209338 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1b7c135-ca87-474f-b765-e6af016c121a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.209357 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njsvc\" (UniqueName: \"kubernetes.io/projected/d1b7c135-ca87-474f-b765-e6af016c121a-kube-api-access-njsvc\") on node \"crc\" DevicePath \"\""
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.646500 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d" event={"ID":"d1b7c135-ca87-474f-b765-e6af016c121a","Type":"ContainerDied","Data":"d5d79dbcf0a2b50177968e842177a45b17d0b6130c0c57f65a3a2676932c34a5"}
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.646552 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d79dbcf0a2b50177968e842177a45b17d0b6130c0c57f65a3a2676932c34a5"
Feb 20 00:15:03 crc kubenswrapper[5107]: I0220 00:15:03.646684 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525775-78h5d"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.127279 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525776-ccd67"]
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.128295 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1b7c135-ca87-474f-b765-e6af016c121a" containerName="collect-profiles"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.128308 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b7c135-ca87-474f-b765-e6af016c121a" containerName="collect-profiles"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.128485 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1b7c135-ca87-474f-b765-e6af016c121a" containerName="collect-profiles"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.192981 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525776-ccd67"]
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.193135 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.196941 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.196976 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.273979 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mmt\" (UniqueName: \"kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt\") pod \"auto-csr-approver-29525776-ccd67\" (UID: \"6635f88f-79b8-4f8a-a478-c70d4c3b88ff\") " pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.376077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mmt\" (UniqueName: \"kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt\") pod \"auto-csr-approver-29525776-ccd67\" (UID: \"6635f88f-79b8-4f8a-a478-c70d4c3b88ff\") " pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.411529 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2mmt\" (UniqueName: \"kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt\") pod \"auto-csr-approver-29525776-ccd67\" (UID: \"6635f88f-79b8-4f8a-a478-c70d4c3b88ff\") " pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.517279 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:00 crc kubenswrapper[5107]: I0220 00:16:00.993687 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525776-ccd67"]
Feb 20 00:16:01 crc kubenswrapper[5107]: W0220 00:16:01.004569 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6635f88f_79b8_4f8a_a478_c70d4c3b88ff.slice/crio-421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98 WatchSource:0}: Error finding container 421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98: Status 404 returned error can't find the container with id 421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98
Feb 20 00:16:01 crc kubenswrapper[5107]: I0220 00:16:01.034999 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525776-ccd67" event={"ID":"6635f88f-79b8-4f8a-a478-c70d4c3b88ff","Type":"ContainerStarted","Data":"421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98"}
Feb 20 00:16:02 crc kubenswrapper[5107]: I0220 00:16:02.824130 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:16:02 crc kubenswrapper[5107]: I0220 00:16:02.824243 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:16:04 crc kubenswrapper[5107]: I0220 00:16:04.051545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525776-ccd67" event={"ID":"6635f88f-79b8-4f8a-a478-c70d4c3b88ff","Type":"ContainerStarted","Data":"224390b7f1c9d11a0037e1fe378c663a18c1b5ce00d89266197d92fa2ecd5883"}
Feb 20 00:16:04 crc kubenswrapper[5107]: I0220 00:16:04.277304 5107 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-sg475"
Feb 20 00:16:04 crc kubenswrapper[5107]: I0220 00:16:04.300060 5107 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-sg475"
Feb 20 00:16:05 crc kubenswrapper[5107]: I0220 00:16:05.061322 5107 generic.go:358] "Generic (PLEG): container finished" podID="6635f88f-79b8-4f8a-a478-c70d4c3b88ff" containerID="224390b7f1c9d11a0037e1fe378c663a18c1b5ce00d89266197d92fa2ecd5883" exitCode=0
Feb 20 00:16:05 crc kubenswrapper[5107]: I0220 00:16:05.061384 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525776-ccd67" event={"ID":"6635f88f-79b8-4f8a-a478-c70d4c3b88ff","Type":"ContainerDied","Data":"224390b7f1c9d11a0037e1fe378c663a18c1b5ce00d89266197d92fa2ecd5883"}
Feb 20 00:16:05 crc kubenswrapper[5107]: I0220 00:16:05.302106 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-22 00:11:04 +0000 UTC" deadline="2026-03-17 14:33:26.922839868 +0000 UTC"
Feb 20 00:16:05 crc kubenswrapper[5107]: I0220 00:16:05.302161 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="614h17m21.620682848s"
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.302747 5107 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-22 00:11:04 +0000 UTC" deadline="2026-03-18 15:16:42.212054439 +0000 UTC"
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.303870 5107 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="639h0m35.908191253s"
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.358694 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.466518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2mmt\" (UniqueName: \"kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt\") pod \"6635f88f-79b8-4f8a-a478-c70d4c3b88ff\" (UID: \"6635f88f-79b8-4f8a-a478-c70d4c3b88ff\") "
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.477279 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt" (OuterVolumeSpecName: "kube-api-access-t2mmt") pod "6635f88f-79b8-4f8a-a478-c70d4c3b88ff" (UID: "6635f88f-79b8-4f8a-a478-c70d4c3b88ff"). InnerVolumeSpecName "kube-api-access-t2mmt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:16:06 crc kubenswrapper[5107]: I0220 00:16:06.569428 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2mmt\" (UniqueName: \"kubernetes.io/projected/6635f88f-79b8-4f8a-a478-c70d4c3b88ff-kube-api-access-t2mmt\") on node \"crc\" DevicePath \"\""
Feb 20 00:16:07 crc kubenswrapper[5107]: I0220 00:16:07.079515 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525776-ccd67" event={"ID":"6635f88f-79b8-4f8a-a478-c70d4c3b88ff","Type":"ContainerDied","Data":"421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98"}
Feb 20 00:16:07 crc kubenswrapper[5107]: I0220 00:16:07.079554 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421b1581ae613e340cb08d66b39131f6bb931235609cec017fe1b796ecdf2c98"
Feb 20 00:16:07 crc kubenswrapper[5107]: I0220 00:16:07.079978 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525776-ccd67"
Feb 20 00:16:32 crc kubenswrapper[5107]: I0220 00:16:32.824906 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:16:32 crc kubenswrapper[5107]: I0220 00:16:32.826392 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:17:02 crc kubenswrapper[5107]: I0220 00:17:02.824959 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:17:02 crc kubenswrapper[5107]: I0220 00:17:02.825837 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:17:02 crc kubenswrapper[5107]: I0220 00:17:02.825908 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:17:02 crc kubenswrapper[5107]: I0220 00:17:02.826780 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 20 00:17:02 crc kubenswrapper[5107]: I0220 00:17:02.826910 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c" gracePeriod=600
Feb 20 00:17:03 crc kubenswrapper[5107]: I0220 00:17:03.495379 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c" exitCode=0
Feb 20 00:17:03 crc kubenswrapper[5107]: I0220 00:17:03.495486 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c"}
Feb 20 00:17:03 crc kubenswrapper[5107]: I0220 00:17:03.496037 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee"}
Feb 20 00:17:03 crc kubenswrapper[5107]: I0220 00:17:03.496070 5107 scope.go:117] "RemoveContainer" containerID="a2b0d29d657a8e1e523026507a935569b2cff249c2e3d5743b380396be4cd1c2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.139244 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525778-2l5s2"]
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.140670 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6635f88f-79b8-4f8a-a478-c70d4c3b88ff" containerName="oc"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.140688 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="6635f88f-79b8-4f8a-a478-c70d4c3b88ff" containerName="oc"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.140806 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="6635f88f-79b8-4f8a-a478-c70d4c3b88ff" containerName="oc"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.149345 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-2l5s2"]
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.149489 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.152319 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.152641 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.190056 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7n5\" (UniqueName: \"kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5\") pod \"auto-csr-approver-29525778-2l5s2\" (UID: \"b4d3402d-9de0-434f-9a28-c032250d9161\") " pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.291242 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cz7n5\" (UniqueName: \"kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5\") pod \"auto-csr-approver-29525778-2l5s2\" (UID: \"b4d3402d-9de0-434f-9a28-c032250d9161\") " pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.323663 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz7n5\" (UniqueName: \"kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5\") pod \"auto-csr-approver-29525778-2l5s2\" (UID: \"b4d3402d-9de0-434f-9a28-c032250d9161\") " pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.470700 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.693716 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-2l5s2"]
Feb 20 00:18:00 crc kubenswrapper[5107]: I0220 00:18:00.902334 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-2l5s2" event={"ID":"b4d3402d-9de0-434f-9a28-c032250d9161","Type":"ContainerStarted","Data":"3c4d2a674be70fc235d330e778c0b3160a07011f87af387d0a4d50cc2855c087"}
Feb 20 00:18:02 crc kubenswrapper[5107]: I0220 00:18:02.919891 5107 generic.go:358] "Generic (PLEG): container finished" podID="b4d3402d-9de0-434f-9a28-c032250d9161" containerID="78a86da67cbb12eed73cba0d469b3fe2d9584f25586e955d3f37b90b5878a7e5" exitCode=0
Feb 20 00:18:02 crc kubenswrapper[5107]: I0220 00:18:02.919980 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-2l5s2" event={"ID":"b4d3402d-9de0-434f-9a28-c032250d9161","Type":"ContainerDied","Data":"78a86da67cbb12eed73cba0d469b3fe2d9584f25586e955d3f37b90b5878a7e5"}
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.278102 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.345851 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz7n5\" (UniqueName: \"kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5\") pod \"b4d3402d-9de0-434f-9a28-c032250d9161\" (UID: \"b4d3402d-9de0-434f-9a28-c032250d9161\") "
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.367511 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5" (OuterVolumeSpecName: "kube-api-access-cz7n5") pod "b4d3402d-9de0-434f-9a28-c032250d9161" (UID: "b4d3402d-9de0-434f-9a28-c032250d9161"). InnerVolumeSpecName "kube-api-access-cz7n5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.449186 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cz7n5\" (UniqueName: \"kubernetes.io/projected/b4d3402d-9de0-434f-9a28-c032250d9161-kube-api-access-cz7n5\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.936918 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525778-2l5s2"
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.936915 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525778-2l5s2" event={"ID":"b4d3402d-9de0-434f-9a28-c032250d9161","Type":"ContainerDied","Data":"3c4d2a674be70fc235d330e778c0b3160a07011f87af387d0a4d50cc2855c087"}
Feb 20 00:18:04 crc kubenswrapper[5107]: I0220 00:18:04.937092 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4d2a674be70fc235d330e778c0b3160a07011f87af387d0a4d50cc2855c087"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.327521 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"]
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.328392 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="kube-rbac-proxy" containerID="cri-o://6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.328778 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="ovnkube-cluster-manager" containerID="cri-o://0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.536605 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.554199 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glc89"]
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.554845 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-controller" containerID="cri-o://a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.554927 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="northd" containerID="cri-o://9cd5146ae94c10b4e5bfdd7e8e3bf42cc6d407c2358db371e6e746bbdde1cc93" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.555003 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.555082 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-node" containerID="cri-o://8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.555120 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="sbdb" containerID="cri-o://683f7d09537d646287589f9dd2c0eedb6307891e7d1e994a18977f335ed81f35" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.555135 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-acl-logging" containerID="cri-o://877ba0c2d848e84f3ff10b987145674d55fe60ec255efc10f605f24b98963c96" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.555191 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="nbdb" containerID="cri-o://e4c55497b2c440e7767f3c6602b03f849de2496fa8c6f6649136e91fbadb1b39" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.582977 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovnkube-controller" containerID="cri-o://cbb81ffa831c22f321eaa8ce88f7ce0c8849e0f376587a9a80c09113a0c3795b" gracePeriod=30
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.585477 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp"]
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586377 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4d3402d-9de0-434f-9a28-c032250d9161" containerName="oc"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586408 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d3402d-9de0-434f-9a28-c032250d9161" containerName="oc"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586434 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="ovnkube-cluster-manager"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586445 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="ovnkube-cluster-manager"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586484 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="kube-rbac-proxy"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586497 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="kube-rbac-proxy"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586644 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="ovnkube-cluster-manager"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586671 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4d3402d-9de0-434f-9a28-c032250d9161" containerName="oc"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.586697 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerName="kube-rbac-proxy"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.596230 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp"
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.596587 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert\") pod \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") "
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.596661 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides\") pod \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") "
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.596690 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzt7b\" (UniqueName: \"kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b\") pod \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") "
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.596781 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config\") pod \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\" (UID: \"ff511768-9c0a-4c27-a386-24c9cd8c4eac\") "
Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.598236 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ff511768-9c0a-4c27-a386-24c9cd8c4eac" (UID: "ff511768-9c0a-4c27-a386-24c9cd8c4eac"). InnerVolumeSpecName "ovnkube-config".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.598521 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ff511768-9c0a-4c27-a386-24c9cd8c4eac" (UID: "ff511768-9c0a-4c27-a386-24c9cd8c4eac"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.610749 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b" (OuterVolumeSpecName: "kube-api-access-pzt7b") pod "ff511768-9c0a-4c27-a386-24c9cd8c4eac" (UID: "ff511768-9c0a-4c27-a386-24c9cd8c4eac"). InnerVolumeSpecName "kube-api-access-pzt7b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.633938 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "ff511768-9c0a-4c27-a386-24c9cd8c4eac" (UID: "ff511768-9c0a-4c27-a386-24c9cd8c4eac"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:18:21 crc kubenswrapper[5107]: E0220 00:18:21.651613 5107 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6b562f_b4f7_400d_b6c2_cf5df40d6eaf.slice/crio-8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6b562f_b4f7_400d_b6c2_cf5df40d6eaf.slice/crio-5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d08e95_6328_4e97_aab4_4dd9913914cc.slice/crio-11d929442d04c0332dcbbdccbd6f701427a86529b8e63352dc475bc4e2927653.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf6b562f_b4f7_400d_b6c2_cf5df40d6eaf.slice/crio-conmon-a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c.scope\": RecentStats: unable to find data in memory cache]" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698483 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ceab22-0611-492c-87aa-66cdb13e8701-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698544 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6vn\" (UniqueName: \"kubernetes.io/projected/d5ceab22-0611-492c-87aa-66cdb13e8701-kube-api-access-lp6vn\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" 
(UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698573 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698658 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698720 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698829 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698924 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzt7b\" (UniqueName: \"kubernetes.io/projected/ff511768-9c0a-4c27-a386-24c9cd8c4eac-kube-api-access-pzt7b\") on node \"crc\" DevicePath \"\"" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.698980 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ff511768-9c0a-4c27-a386-24c9cd8c4eac-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.800426 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ceab22-0611-492c-87aa-66cdb13e8701-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.800943 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6vn\" (UniqueName: \"kubernetes.io/projected/d5ceab22-0611-492c-87aa-66cdb13e8701-kube-api-access-lp6vn\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.800981 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.801060 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.801895 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.801987 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ceab22-0611-492c-87aa-66cdb13e8701-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.804854 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ceab22-0611-492c-87aa-66cdb13e8701-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.822400 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6vn\" (UniqueName: \"kubernetes.io/projected/d5ceab22-0611-492c-87aa-66cdb13e8701-kube-api-access-lp6vn\") pod \"ovnkube-control-plane-97c9b6c48-nxstp\" (UID: \"d5ceab22-0611-492c-87aa-66cdb13e8701\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:21 crc kubenswrapper[5107]: I0220 00:18:21.984825 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.046028 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.046104 5107 generic.go:358] "Generic (PLEG): container finished" podID="c9d08e95-6328-4e97-aab4-4dd9913914cc" containerID="11d929442d04c0332dcbbdccbd6f701427a86529b8e63352dc475bc4e2927653" exitCode=2 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.046306 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fnskd" event={"ID":"c9d08e95-6328-4e97-aab4-4dd9913914cc","Type":"ContainerDied","Data":"11d929442d04c0332dcbbdccbd6f701427a86529b8e63352dc475bc4e2927653"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.048381 5107 scope.go:117] "RemoveContainer" containerID="11d929442d04c0332dcbbdccbd6f701427a86529b8e63352dc475bc4e2927653" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.050769 5107 generic.go:358] "Generic (PLEG): container finished" podID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerID="0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.050804 5107 generic.go:358] "Generic (PLEG): container finished" podID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" containerID="6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.050857 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.051061 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerDied","Data":"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.051102 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerDied","Data":"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.051125 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp" event={"ID":"ff511768-9c0a-4c27-a386-24c9cd8c4eac","Type":"ContainerDied","Data":"2ce5e4357c6925a1748bb64dbd95e0e72c4edec329385cfe62d873a2808c712e"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.051213 5107 scope.go:117] "RemoveContainer" containerID="0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.063615 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-acl-logging/0.log" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.076440 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-controller/0.log" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.077349 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="cbb81ffa831c22f321eaa8ce88f7ce0c8849e0f376587a9a80c09113a0c3795b" 
exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.077722 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="683f7d09537d646287589f9dd2c0eedb6307891e7d1e994a18977f335ed81f35" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.077866 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="e4c55497b2c440e7767f3c6602b03f849de2496fa8c6f6649136e91fbadb1b39" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.077986 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="9cd5146ae94c10b4e5bfdd7e8e3bf42cc6d407c2358db371e6e746bbdde1cc93" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078114 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078340 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4" exitCode=0 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078478 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="877ba0c2d848e84f3ff10b987145674d55fe60ec255efc10f605f24b98963c96" exitCode=143 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078595 5107 generic.go:358] "Generic (PLEG): container finished" podID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerID="a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c" exitCode=143 Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.077508 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" 
event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"cbb81ffa831c22f321eaa8ce88f7ce0c8849e0f376587a9a80c09113a0c3795b"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078874 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"683f7d09537d646287589f9dd2c0eedb6307891e7d1e994a18977f335ed81f35"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078915 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"e4c55497b2c440e7767f3c6602b03f849de2496fa8c6f6649136e91fbadb1b39"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078946 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"9cd5146ae94c10b4e5bfdd7e8e3bf42cc6d407c2358db371e6e746bbdde1cc93"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078972 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.078994 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.079017 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" 
event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"877ba0c2d848e84f3ff10b987145674d55fe60ec255efc10f605f24b98963c96"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.079040 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.086780 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" event={"ID":"d5ceab22-0611-492c-87aa-66cdb13e8701","Type":"ContainerStarted","Data":"9a7a2942b75508b20020acae3b5fd7bda566e213d516b0815fd252c6115ae009"} Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.096747 5107 scope.go:117] "RemoveContainer" containerID="6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.117898 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"] Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.121474 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-bf9fp"] Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.131829 5107 scope.go:117] "RemoveContainer" containerID="0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" Feb 20 00:18:22 crc kubenswrapper[5107]: E0220 00:18:22.132461 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0\": container with ID starting with 0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0 not found: ID does not exist" 
containerID="0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.132531 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0"} err="failed to get container status \"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0\": rpc error: code = NotFound desc = could not find container \"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0\": container with ID starting with 0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0 not found: ID does not exist" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.132575 5107 scope.go:117] "RemoveContainer" containerID="6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" Feb 20 00:18:22 crc kubenswrapper[5107]: E0220 00:18:22.133000 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189\": container with ID starting with 6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189 not found: ID does not exist" containerID="6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.133042 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189"} err="failed to get container status \"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189\": rpc error: code = NotFound desc = could not find container \"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189\": container with ID starting with 6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189 not found: ID does not exist" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.133068 5107 scope.go:117] 
"RemoveContainer" containerID="0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.133552 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0"} err="failed to get container status \"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0\": rpc error: code = NotFound desc = could not find container \"0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0\": container with ID starting with 0dd3a7fc07e23f34d434794c443bd144b4c50c362794b29e34d8de15180c4fb0 not found: ID does not exist" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.133587 5107 scope.go:117] "RemoveContainer" containerID="6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.133954 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189"} err="failed to get container status \"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189\": rpc error: code = NotFound desc = could not find container \"6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189\": container with ID starting with 6a6c2f36d65986162afcce08e460081a513c8f2aef40f460917dec3baefb8189 not found: ID does not exist" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.233628 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-acl-logging/0.log" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.234336 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-controller/0.log" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.234934 5107 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.301590 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2577r"] Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.302348 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="northd" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.302371 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="northd" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.302394 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-node" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.302404 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-node" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303173 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kubecfg-setup" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303195 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kubecfg-setup" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303225 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303235 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 
00:18:22.303244 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="nbdb" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303252 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="nbdb" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303268 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-controller" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303276 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-controller" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303290 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="sbdb" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303298 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="sbdb" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303319 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovnkube-controller" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303329 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovnkube-controller" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303342 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-acl-logging" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303349 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-acl-logging" Feb 20 00:18:22 crc 
kubenswrapper[5107]: I0220 00:18:22.303478 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-ovn-metrics"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303494 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-acl-logging"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303510 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovnkube-controller"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303523 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="nbdb"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303535 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="northd"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303545 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="ovn-controller"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303554 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="sbdb"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.303565 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" containerName="kube-rbac-proxy-node"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308317 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308387 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308422 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308448 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308482 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308508 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308548 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308575 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhdnk\" (UniqueName: \"kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308604 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308694 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308723 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308765 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308788 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308896 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308929 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308950 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308971 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.308996 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash\") pod \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\" (UID: \"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf\") "
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.309342 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash" (OuterVolumeSpecName: "host-slash") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.309384 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310320 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310350 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310368 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket" (OuterVolumeSpecName: "log-socket") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log" (OuterVolumeSpecName: "node-log") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310403 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.310419 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.318708 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.319685 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.319766 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.319801 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.319828 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.320453 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.320499 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.320653 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.320712 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.320896 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.323717 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.327309 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk" (OuterVolumeSpecName: "kube-api-access-qhdnk") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "kube-api-access-qhdnk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.334865 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" (UID: "af6b562f-b4f7-400d-b6c2-cf5df40d6eaf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.410753 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411157 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-var-lib-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411206 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-netd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411232 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411247 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-config\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411261 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-etc-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411282 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-netns\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411298 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-kubelet\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411320 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-log-socket\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411342 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-bin\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411366 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfc7b559-56f6-49f7-9663-2a932deaff42-ovn-node-metrics-cert\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411381 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-systemd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411398 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6nss\" (UniqueName: \"kubernetes.io/projected/cfc7b559-56f6-49f7-9663-2a932deaff42-kube-api-access-l6nss\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411415 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411439 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-script-lib\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411462 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-env-overrides\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411484 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-systemd-units\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411501 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-ovn\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411533 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-slash\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411558 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-node-log\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411592 5107 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411602 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411610 5107 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411618 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411626 5107 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-slash\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411635 5107 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411644 5107 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411652 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411659 5107 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-log-socket\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411667 5107 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-node-log\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411675 5107 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411682 5107 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411691 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhdnk\" (UniqueName: \"kubernetes.io/projected/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-kube-api-access-qhdnk\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411698 5107 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411707 5107 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411715 5107 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411723 5107 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411732 5107 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411741 5107 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.411750 5107 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.495933 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff511768-9c0a-4c27-a386-24c9cd8c4eac" path="/var/lib/kubelet/pods/ff511768-9c0a-4c27-a386-24c9cd8c4eac/volumes"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512370 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-node-log\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512443 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512482 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-var-lib-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512532 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-node-log\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512620 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-netd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512669 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512690 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-config\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512736 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-etc-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512771 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-var-lib-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512784 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512849 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-etc-openvswitch\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512779 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-netns\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512902 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-kubelet\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512800 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-netns\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512942 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-log-socket\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512982 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-bin\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513015 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-kubelet\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513024 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-bin\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512913 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-cni-netd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513108 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfc7b559-56f6-49f7-9663-2a932deaff42-ovn-node-metrics-cert\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513186 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-systemd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513137 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-log-socket\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513233 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6nss\" (UniqueName: \"kubernetes.io/projected/cfc7b559-56f6-49f7-9663-2a932deaff42-kube-api-access-l6nss\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513264 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r"
Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513311 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-script-lib\") pod \"ovnkube-node-2577r\" (UID:
\"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513323 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-systemd\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513346 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-env-overrides\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513391 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-systemd-units\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513424 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-systemd-units\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513447 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-ovn\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513477 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-config\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513501 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-slash\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513600 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-slash\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.513663 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-run-ovn\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.512673 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cfc7b559-56f6-49f7-9663-2a932deaff42-host-run-ovn-kubernetes\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.514011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-env-overrides\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.514564 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cfc7b559-56f6-49f7-9663-2a932deaff42-ovnkube-script-lib\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.520721 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfc7b559-56f6-49f7-9663-2a932deaff42-ovn-node-metrics-cert\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.532107 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6nss\" (UniqueName: \"kubernetes.io/projected/cfc7b559-56f6-49f7-9663-2a932deaff42-kube-api-access-l6nss\") pod \"ovnkube-node-2577r\" (UID: \"cfc7b559-56f6-49f7-9663-2a932deaff42\") " pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: I0220 00:18:22.652807 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:22 crc kubenswrapper[5107]: W0220 00:18:22.675388 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfc7b559_56f6_49f7_9663_2a932deaff42.slice/crio-6061578c99b395ff0bb701a858952fa2f05fc62fd1464fb84225c5f799f1e125 WatchSource:0}: Error finding container 6061578c99b395ff0bb701a858952fa2f05fc62fd1464fb84225c5f799f1e125: Status 404 returned error can't find the container with id 6061578c99b395ff0bb701a858952fa2f05fc62fd1464fb84225c5f799f1e125 Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.098597 5107 generic.go:358] "Generic (PLEG): container finished" podID="cfc7b559-56f6-49f7-9663-2a932deaff42" containerID="c0b8b82ab28203ddeec97cf4f3a8b364e9904f9245f585c6e2666cca94374100" exitCode=0 Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.098718 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerDied","Data":"c0b8b82ab28203ddeec97cf4f3a8b364e9904f9245f585c6e2666cca94374100"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.098776 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"6061578c99b395ff0bb701a858952fa2f05fc62fd1464fb84225c5f799f1e125"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.108622 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-acl-logging/0.log" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.110006 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glc89_af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/ovn-controller/0.log" Feb 20 00:18:23 crc 
kubenswrapper[5107]: I0220 00:18:23.110934 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" event={"ID":"af6b562f-b4f7-400d-b6c2-cf5df40d6eaf","Type":"ContainerDied","Data":"600f5d41bba18f5b995d26c293f990882bfa7d486b864f5613eff555c18a946d"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.111024 5107 scope.go:117] "RemoveContainer" containerID="cbb81ffa831c22f321eaa8ce88f7ce0c8849e0f376587a9a80c09113a0c3795b" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.111031 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glc89" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.117401 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" event={"ID":"d5ceab22-0611-492c-87aa-66cdb13e8701","Type":"ContainerStarted","Data":"6577ac9b2e55c9673629f10002c3f9a136c3fd5c8563e74ac4985918ca7b9de0"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.117469 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" event={"ID":"d5ceab22-0611-492c-87aa-66cdb13e8701","Type":"ContainerStarted","Data":"8ce93ae935fc66dac5e6846559ccb38e5f3d41241f485ac0ad9bce53c52a79a9"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.121982 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.122187 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fnskd" event={"ID":"c9d08e95-6328-4e97-aab4-4dd9913914cc","Type":"ContainerStarted","Data":"c2eb32cf8efb68ef20b4166be894a695a3adbc3bddf9c163acbe4996e62d6c18"} Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.158462 5107 scope.go:117] "RemoveContainer" 
containerID="683f7d09537d646287589f9dd2c0eedb6307891e7d1e994a18977f335ed81f35" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.175038 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-nxstp" podStartSLOduration=2.175017044 podStartE2EDuration="2.175017044s" podCreationTimestamp="2026-02-20 00:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:18:23.171124408 +0000 UTC m=+589.539781984" watchObservedRunningTime="2026-02-20 00:18:23.175017044 +0000 UTC m=+589.543674620" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.189843 5107 scope.go:117] "RemoveContainer" containerID="e4c55497b2c440e7767f3c6602b03f849de2496fa8c6f6649136e91fbadb1b39" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.214158 5107 scope.go:117] "RemoveContainer" containerID="9cd5146ae94c10b4e5bfdd7e8e3bf42cc6d407c2358db371e6e746bbdde1cc93" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.225872 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glc89"] Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.230114 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glc89"] Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.244493 5107 scope.go:117] "RemoveContainer" containerID="5fb6012486258bbd77a807a27c4c7bb0f8eef75d373a44256da52f1156c64c1d" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.259473 5107 scope.go:117] "RemoveContainer" containerID="8d37144fd243566e06cd611899402a4dd9aafa7b9bad63dc2877f81d26f164e4" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.274426 5107 scope.go:117] "RemoveContainer" containerID="877ba0c2d848e84f3ff10b987145674d55fe60ec255efc10f605f24b98963c96" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.298307 5107 scope.go:117] 
"RemoveContainer" containerID="a5acd80a95a9f26ca02364882aa271205aa59d202f017a9f2b3e2f03265f438c" Feb 20 00:18:23 crc kubenswrapper[5107]: I0220 00:18:23.317841 5107 scope.go:117] "RemoveContainer" containerID="9968bd56bbece179916db8507f600f115c6f287694215d673c8f6a2b4dbc2d93" Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.133640 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"5fdfbd01a1173a35c15995261e1806cf22c6873ce80581793c2f217f17d08ec5"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.134100 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"246a91a600f30827bff7f7f04951af132e841e129cf8795f057cb558441aec6c"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.134119 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"d8c855e8b62f225b2c8b403e7e6845bd34a4d8a27a68509d00e9933b2401fad3"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.134131 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"bb52184396f6f3b2410bd031f1fa8d8aa8437eceb5d9b67274b9e41440acfc1d"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.134158 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"d96777691a82d72088a37bf6dc562356b1db003399c299401ff5a891e8f47cdc"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.134170 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"7345e7cd7d2123d111332880cd9b92d3b4f753563c0216ec80fb335913b03a70"} Feb 20 00:18:24 crc kubenswrapper[5107]: I0220 00:18:24.499914 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6b562f-b4f7-400d-b6c2-cf5df40d6eaf" path="/var/lib/kubelet/pods/af6b562f-b4f7-400d-b6c2-cf5df40d6eaf/volumes" Feb 20 00:18:27 crc kubenswrapper[5107]: I0220 00:18:27.166648 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"d2ec8e704ac173def7a69b7f1beeacfc71a5616ad31ee059724fd504fed621a8"} Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.185179 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" event={"ID":"cfc7b559-56f6-49f7-9663-2a932deaff42","Type":"ContainerStarted","Data":"ce5cfb146ca3a16a6895936639ddfcb2eeba7fd7b58dfb0b318c1d10abf1e6ce"} Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.186077 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.186099 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.186116 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.216860 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" podStartSLOduration=7.216835561 podStartE2EDuration="7.216835561s" podCreationTimestamp="2026-02-20 00:18:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:18:29.214099736 +0000 UTC m=+595.582757302" watchObservedRunningTime="2026-02-20 00:18:29.216835561 +0000 UTC m=+595.585493127" Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.231077 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:29 crc kubenswrapper[5107]: I0220 00:18:29.255486 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:18:34 crc kubenswrapper[5107]: I0220 00:18:34.764849 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:18:34 crc kubenswrapper[5107]: I0220 00:18:34.771446 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:18:34 crc kubenswrapper[5107]: I0220 00:18:34.773419 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:18:34 crc kubenswrapper[5107]: I0220 00:18:34.781397 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:19:01 crc kubenswrapper[5107]: I0220 00:19:01.229158 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2577r" Feb 20 00:19:17 crc kubenswrapper[5107]: I0220 00:19:17.717852 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:19:17 crc kubenswrapper[5107]: I0220 00:19:17.719046 5107 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5shbf" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="registry-server" containerID="cri-o://bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad" gracePeriod=30 Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.048185 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.115340 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82kpt\" (UniqueName: \"kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt\") pod \"10fea87c-1407-4938-96ac-727dfe224f63\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.115411 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities\") pod \"10fea87c-1407-4938-96ac-727dfe224f63\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.115478 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content\") pod \"10fea87c-1407-4938-96ac-727dfe224f63\" (UID: \"10fea87c-1407-4938-96ac-727dfe224f63\") " Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.117522 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities" (OuterVolumeSpecName: "utilities") pod "10fea87c-1407-4938-96ac-727dfe224f63" (UID: "10fea87c-1407-4938-96ac-727dfe224f63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.122427 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt" (OuterVolumeSpecName: "kube-api-access-82kpt") pod "10fea87c-1407-4938-96ac-727dfe224f63" (UID: "10fea87c-1407-4938-96ac-727dfe224f63"). InnerVolumeSpecName "kube-api-access-82kpt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.130903 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10fea87c-1407-4938-96ac-727dfe224f63" (UID: "10fea87c-1407-4938-96ac-727dfe224f63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.216936 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82kpt\" (UniqueName: \"kubernetes.io/projected/10fea87c-1407-4938-96ac-727dfe224f63-kube-api-access-82kpt\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.216991 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.217010 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fea87c-1407-4938-96ac-727dfe224f63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.545705 5107 generic.go:358] "Generic (PLEG): container finished" podID="10fea87c-1407-4938-96ac-727dfe224f63" 
containerID="bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad" exitCode=0 Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.545780 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerDied","Data":"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad"} Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.545855 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5shbf" event={"ID":"10fea87c-1407-4938-96ac-727dfe224f63","Type":"ContainerDied","Data":"02034380e194cb6cf30d53b5a53d2c6781e992f74ec46f5dd7d66fad9813f276"} Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.545895 5107 scope.go:117] "RemoveContainer" containerID="bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.545924 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5shbf" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.573520 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.582431 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5shbf"] Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.583185 5107 scope.go:117] "RemoveContainer" containerID="06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.611068 5107 scope.go:117] "RemoveContainer" containerID="e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.629172 5107 scope.go:117] "RemoveContainer" containerID="bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad" Feb 20 00:19:18 crc kubenswrapper[5107]: E0220 00:19:18.629787 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad\": container with ID starting with bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad not found: ID does not exist" containerID="bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.629842 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad"} err="failed to get container status \"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad\": rpc error: code = NotFound desc = could not find container \"bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad\": container with ID starting with bf603dac2190d6348e91edec61ccf102e3f4fc22ed06db3c04869119c682e0ad not found: 
ID does not exist" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.629870 5107 scope.go:117] "RemoveContainer" containerID="06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1" Feb 20 00:19:18 crc kubenswrapper[5107]: E0220 00:19:18.630232 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1\": container with ID starting with 06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1 not found: ID does not exist" containerID="06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.630297 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1"} err="failed to get container status \"06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1\": rpc error: code = NotFound desc = could not find container \"06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1\": container with ID starting with 06289815dee05c97e7c8be61fc3c1eb38c0ffd42015231890d56bb716c2f3cf1 not found: ID does not exist" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.630337 5107 scope.go:117] "RemoveContainer" containerID="e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b" Feb 20 00:19:18 crc kubenswrapper[5107]: E0220 00:19:18.631382 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b\": container with ID starting with e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b not found: ID does not exist" containerID="e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b" Feb 20 00:19:18 crc kubenswrapper[5107]: I0220 00:19:18.631413 5107 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b"} err="failed to get container status \"e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b\": rpc error: code = NotFound desc = could not find container \"e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b\": container with ID starting with e013fae56b7002233105abc400fca8791f85635a7cfff8d1f2c3a29a4f12948b not found: ID does not exist"
Feb 20 00:19:20 crc kubenswrapper[5107]: I0220 00:19:20.491825 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fea87c-1407-4938-96ac-727dfe224f63" path="/var/lib/kubelet/pods/10fea87c-1407-4938-96ac-727dfe224f63/volumes"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.253602 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"]
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254308 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="extract-utilities"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254330 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="extract-utilities"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254343 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="registry-server"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254351 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="registry-server"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254370 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="extract-content"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254377 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="extract-content"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.254491 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="10fea87c-1407-4938-96ac-727dfe224f63" containerName="registry-server"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.266029 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.270624 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.291094 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"]
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.363459 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ws7v\" (UniqueName: \"kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.363549 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.363635 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.464589 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.464681 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7ws7v\" (UniqueName: \"kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.464704 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.465113 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.465166 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.483805 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ws7v\" (UniqueName: \"kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:21 crc kubenswrapper[5107]: I0220 00:19:21.584702 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:22 crc kubenswrapper[5107]: I0220 00:19:22.038138 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"]
Feb 20 00:19:22 crc kubenswrapper[5107]: I0220 00:19:22.054210 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 00:19:22 crc kubenswrapper[5107]: I0220 00:19:22.588329 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerStarted","Data":"47b840ca1c043e2fa6284d8c8757f4e9bee8bcb77e825c8ad5c0e761efebb10b"}
Feb 20 00:19:22 crc kubenswrapper[5107]: I0220 00:19:22.588412 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerStarted","Data":"d8c9679ba465b3dbb7844b54d7ad63a3f93172149cba499f95d2c0d599bb5526"}
Feb 20 00:19:23 crc kubenswrapper[5107]: I0220 00:19:23.598751 5107 generic.go:358] "Generic (PLEG): container finished" podID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerID="47b840ca1c043e2fa6284d8c8757f4e9bee8bcb77e825c8ad5c0e761efebb10b" exitCode=0
Feb 20 00:19:23 crc kubenswrapper[5107]: I0220 00:19:23.599179 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerDied","Data":"47b840ca1c043e2fa6284d8c8757f4e9bee8bcb77e825c8ad5c0e761efebb10b"}
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.036586 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"]
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.044871 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.050348 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"]
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.110125 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.110196 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.110364 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjjk\" (UniqueName: \"kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.212136 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqjjk\" (UniqueName: \"kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.212286 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.212325 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.212897 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.212986 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.247765 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqjjk\" (UniqueName: \"kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.368636 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.613223 5107 generic.go:358] "Generic (PLEG): container finished" podID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerID="dfbdc00f6ddcda0dd3a2c36b4da26cbe08c84017a4a96974e4bbb66e87733aff" exitCode=0
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.613274 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerDied","Data":"dfbdc00f6ddcda0dd3a2c36b4da26cbe08c84017a4a96974e4bbb66e87733aff"}
Feb 20 00:19:25 crc kubenswrapper[5107]: I0220 00:19:25.847312 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7"]
Feb 20 00:19:25 crc kubenswrapper[5107]: W0220 00:19:25.852732 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod918d6f5a_f717_46f4_b49b_b057f68828da.slice/crio-2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21 WatchSource:0}: Error finding container 2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21: Status 404 returned error can't find the container with id 2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21
Feb 20 00:19:26 crc kubenswrapper[5107]: I0220 00:19:26.622231 5107 generic.go:358] "Generic (PLEG): container finished" podID="918d6f5a-f717-46f4-b49b-b057f68828da" containerID="ca9235bc9a3aea1ff67bc4555c57e2185859a2ed3375f4d929f98968c612449a" exitCode=0
Feb 20 00:19:26 crc kubenswrapper[5107]: I0220 00:19:26.622336 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" event={"ID":"918d6f5a-f717-46f4-b49b-b057f68828da","Type":"ContainerDied","Data":"ca9235bc9a3aea1ff67bc4555c57e2185859a2ed3375f4d929f98968c612449a"}
Feb 20 00:19:26 crc kubenswrapper[5107]: I0220 00:19:26.622425 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" event={"ID":"918d6f5a-f717-46f4-b49b-b057f68828da","Type":"ContainerStarted","Data":"2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21"}
Feb 20 00:19:26 crc kubenswrapper[5107]: I0220 00:19:26.627095 5107 generic.go:358] "Generic (PLEG): container finished" podID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerID="6e619c112e23dff6ab2cc0f973be41b998b00727b8d713fef7061c718b1c3c57" exitCode=0
Feb 20 00:19:26 crc kubenswrapper[5107]: I0220 00:19:26.627388 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerDied","Data":"6e619c112e23dff6ab2cc0f973be41b998b00727b8d713fef7061c718b1c3c57"}
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.052470 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.153157 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle\") pod \"e8434afa-61e1-42ab-9856-5c4ca12d855e\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") "
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.153329 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util\") pod \"e8434afa-61e1-42ab-9856-5c4ca12d855e\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") "
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.153466 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ws7v\" (UniqueName: \"kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v\") pod \"e8434afa-61e1-42ab-9856-5c4ca12d855e\" (UID: \"e8434afa-61e1-42ab-9856-5c4ca12d855e\") "
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.155908 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle" (OuterVolumeSpecName: "bundle") pod "e8434afa-61e1-42ab-9856-5c4ca12d855e" (UID: "e8434afa-61e1-42ab-9856-5c4ca12d855e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.159565 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v" (OuterVolumeSpecName: "kube-api-access-7ws7v") pod "e8434afa-61e1-42ab-9856-5c4ca12d855e" (UID: "e8434afa-61e1-42ab-9856-5c4ca12d855e"). InnerVolumeSpecName "kube-api-access-7ws7v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.164669 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util" (OuterVolumeSpecName: "util") pod "e8434afa-61e1-42ab-9856-5c4ca12d855e" (UID: "e8434afa-61e1-42ab-9856-5c4ca12d855e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.254884 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7ws7v\" (UniqueName: \"kubernetes.io/projected/e8434afa-61e1-42ab-9856-5c4ca12d855e-kube-api-access-7ws7v\") on node \"crc\" DevicePath \"\""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.254941 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-bundle\") on node \"crc\" DevicePath \"\""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.254960 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8434afa-61e1-42ab-9856-5c4ca12d855e-util\") on node \"crc\" DevicePath \"\""
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.658769 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.658860 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s" event={"ID":"e8434afa-61e1-42ab-9856-5c4ca12d855e","Type":"ContainerDied","Data":"d8c9679ba465b3dbb7844b54d7ad63a3f93172149cba499f95d2c0d599bb5526"}
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.658907 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c9679ba465b3dbb7844b54d7ad63a3f93172149cba499f95d2c0d599bb5526"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.661733 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"]
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662364 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="pull"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662383 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="pull"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662406 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="util"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662413 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="util"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662424 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="extract"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662430 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="extract"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.662530 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="e8434afa-61e1-42ab-9856-5c4ca12d855e" containerName="extract"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.675593 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"]
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.675762 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.759523 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.759611 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.759695 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfrw\" (UniqueName: \"kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.861094 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.861172 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.861206 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfrw\" (UniqueName: \"kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.861689 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.862033 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.877919 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfrw\" (UniqueName: \"kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:28 crc kubenswrapper[5107]: I0220 00:19:28.998516 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"
Feb 20 00:19:29 crc kubenswrapper[5107]: I0220 00:19:29.695500 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"]
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.163267 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"]
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.163445 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.284443 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.284528 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.284606 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dw6f\" (UniqueName: \"kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.385569 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.385947 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dw6f\" (UniqueName: \"kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.386007 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.386616 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.386667 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.407365 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dw6f\" (UniqueName: \"kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.495263 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.680199 5107 generic.go:358] "Generic (PLEG): container finished" podID="918d6f5a-f717-46f4-b49b-b057f68828da" containerID="61bbb7617224c7a564d679609e7f86361a680a7c244342fecbb91aeca4be5638" exitCode=0
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.680301 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" event={"ID":"918d6f5a-f717-46f4-b49b-b057f68828da","Type":"ContainerDied","Data":"61bbb7617224c7a564d679609e7f86361a680a7c244342fecbb91aeca4be5638"}
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.703821 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf"]
Feb 20 00:19:30 crc kubenswrapper[5107]: I0220 00:19:30.952351 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp"]
Feb 20 00:19:31 crc kubenswrapper[5107]: W0220 00:19:31.009355 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f9a9ab_c012_4d7e_a60b_7c3f5a13e82c.slice/crio-adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33 WatchSource:0}: Error finding container adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33: Status 404 returned error can't find the container with id adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.690603 5107 generic.go:358] "Generic (PLEG): container finished" podID="f448c99d-c280-4172-8725-2e9de584ef00" containerID="ad726fc304f327f9899a823abcd327a0e26705a822d7e76243b958d07aed41f0" exitCode=0
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.690682 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" event={"ID":"f448c99d-c280-4172-8725-2e9de584ef00","Type":"ContainerDied","Data":"ad726fc304f327f9899a823abcd327a0e26705a822d7e76243b958d07aed41f0"}
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.691189 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" event={"ID":"f448c99d-c280-4172-8725-2e9de584ef00","Type":"ContainerStarted","Data":"19e1fdb2b74c8e620efaa20fa3c7cda5c77ab8d199c65eacc591fd81a639f7ef"}
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.694723 5107 generic.go:358] "Generic (PLEG): container finished" podID="918d6f5a-f717-46f4-b49b-b057f68828da" containerID="460b94ce2a6f5a55cdb0f25a782b6f71aa2eaf7efc4b171fdfc057feed4eec46" exitCode=0
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.694852 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" event={"ID":"918d6f5a-f717-46f4-b49b-b057f68828da","Type":"ContainerDied","Data":"460b94ce2a6f5a55cdb0f25a782b6f71aa2eaf7efc4b171fdfc057feed4eec46"}
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.696690 5107 generic.go:358] "Generic (PLEG): container finished" podID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerID="b250e0e1ec110aea6ae6359b214da056270d3d15e6d1ff45527d801133d4b7c9" exitCode=0
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.696782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" event={"ID":"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c","Type":"ContainerDied","Data":"b250e0e1ec110aea6ae6359b214da056270d3d15e6d1ff45527d801133d4b7c9"}
Feb 20 00:19:31 crc kubenswrapper[5107]: I0220 00:19:31.696868 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" event={"ID":"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c","Type":"ContainerStarted","Data":"adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33"}
Feb 20 00:19:32 crc kubenswrapper[5107]: I0220 00:19:32.824254 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:19:32 crc kubenswrapper[5107]: I0220 00:19:32.824640 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.114849 5107 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.247591 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle\") pod \"918d6f5a-f717-46f4-b49b-b057f68828da\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.247721 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqjjk\" (UniqueName: \"kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk\") pod \"918d6f5a-f717-46f4-b49b-b057f68828da\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.247825 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util\") pod \"918d6f5a-f717-46f4-b49b-b057f68828da\" (UID: \"918d6f5a-f717-46f4-b49b-b057f68828da\") " Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.248907 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle" (OuterVolumeSpecName: "bundle") pod "918d6f5a-f717-46f4-b49b-b057f68828da" (UID: "918d6f5a-f717-46f4-b49b-b057f68828da"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.256296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk" (OuterVolumeSpecName: "kube-api-access-lqjjk") pod "918d6f5a-f717-46f4-b49b-b057f68828da" (UID: "918d6f5a-f717-46f4-b49b-b057f68828da"). InnerVolumeSpecName "kube-api-access-lqjjk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.261815 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util" (OuterVolumeSpecName: "util") pod "918d6f5a-f717-46f4-b49b-b057f68828da" (UID: "918d6f5a-f717-46f4-b49b-b057f68828da"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.349676 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.349708 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/918d6f5a-f717-46f4-b49b-b057f68828da-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.349717 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqjjk\" (UniqueName: \"kubernetes.io/projected/918d6f5a-f717-46f4-b49b-b057f68828da-kube-api-access-lqjjk\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.717283 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" event={"ID":"918d6f5a-f717-46f4-b49b-b057f68828da","Type":"ContainerDied","Data":"2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21"} Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.717609 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acc569f156897aecebf3bdcec4e657d8aaa51da99424a0a34bbbdf73868cd21" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.717490 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7" Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.730222 5107 generic.go:358] "Generic (PLEG): container finished" podID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerID="8d667e7d9e9aa4217a9b286b29f1a659bd7be402814e06081ea79d1ae2a56374" exitCode=0 Feb 20 00:19:33 crc kubenswrapper[5107]: I0220 00:19:33.730265 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" event={"ID":"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c","Type":"ContainerDied","Data":"8d667e7d9e9aa4217a9b286b29f1a659bd7be402814e06081ea79d1ae2a56374"} Feb 20 00:19:34 crc kubenswrapper[5107]: I0220 00:19:34.736446 5107 generic.go:358] "Generic (PLEG): container finished" podID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerID="218d0dbba20910e5edd07adc2369a74cec421739a85c0131201f536571b202c2" exitCode=0 Feb 20 00:19:34 crc kubenswrapper[5107]: I0220 00:19:34.736550 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" event={"ID":"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c","Type":"ContainerDied","Data":"218d0dbba20910e5edd07adc2369a74cec421739a85c0131201f536571b202c2"} Feb 20 00:19:34 crc kubenswrapper[5107]: I0220 00:19:34.738629 5107 generic.go:358] "Generic (PLEG): container finished" podID="f448c99d-c280-4172-8725-2e9de584ef00" containerID="f71fface3e0d97ec18bb0d413cea1a4647357844e0ad61f845406db881d9b64d" exitCode=0 Feb 20 00:19:34 crc kubenswrapper[5107]: I0220 00:19:34.738654 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" event={"ID":"f448c99d-c280-4172-8725-2e9de584ef00","Type":"ContainerDied","Data":"f71fface3e0d97ec18bb0d413cea1a4647357844e0ad61f845406db881d9b64d"} Feb 20 00:19:35 crc 
kubenswrapper[5107]: I0220 00:19:35.746511 5107 generic.go:358] "Generic (PLEG): container finished" podID="f448c99d-c280-4172-8725-2e9de584ef00" containerID="14d667b81e68262eb9b0f109ab244991e526fc7d1117cbbc159ea8466cea78a7" exitCode=0 Feb 20 00:19:35 crc kubenswrapper[5107]: I0220 00:19:35.746604 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" event={"ID":"f448c99d-c280-4172-8725-2e9de584ef00","Type":"ContainerDied","Data":"14d667b81e68262eb9b0f109ab244991e526fc7d1117cbbc159ea8466cea78a7"} Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.185078 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.289047 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dw6f\" (UniqueName: \"kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f\") pod \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.289092 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle\") pod \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.289250 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util\") pod \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\" (UID: \"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c\") " Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.290093 5107 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle" (OuterVolumeSpecName: "bundle") pod "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" (UID: "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.302692 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util" (OuterVolumeSpecName: "util") pod "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" (UID: "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.309386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f" (OuterVolumeSpecName: "kube-api-access-7dw6f") pod "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" (UID: "25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c"). InnerVolumeSpecName "kube-api-access-7dw6f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.390395 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7dw6f\" (UniqueName: \"kubernetes.io/projected/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-kube-api-access-7dw6f\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.390437 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.390446 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.758537 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.759568 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp" event={"ID":"25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c","Type":"ContainerDied","Data":"adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33"} Feb 20 00:19:36 crc kubenswrapper[5107]: I0220 00:19:36.759611 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb07b3f9f1ab0cf0ed4595f77e87f8c39fe98722de91459814adacdcc799e33" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.179908 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.199539 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util\") pod \"f448c99d-c280-4172-8725-2e9de584ef00\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.199662 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhfrw\" (UniqueName: \"kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw\") pod \"f448c99d-c280-4172-8725-2e9de584ef00\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.199690 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle\") pod \"f448c99d-c280-4172-8725-2e9de584ef00\" (UID: \"f448c99d-c280-4172-8725-2e9de584ef00\") " Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.200679 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle" (OuterVolumeSpecName: "bundle") pod "f448c99d-c280-4172-8725-2e9de584ef00" (UID: "f448c99d-c280-4172-8725-2e9de584ef00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.214367 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw" (OuterVolumeSpecName: "kube-api-access-zhfrw") pod "f448c99d-c280-4172-8725-2e9de584ef00" (UID: "f448c99d-c280-4172-8725-2e9de584ef00"). InnerVolumeSpecName "kube-api-access-zhfrw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.215584 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util" (OuterVolumeSpecName: "util") pod "f448c99d-c280-4172-8725-2e9de584ef00" (UID: "f448c99d-c280-4172-8725-2e9de584ef00"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.301780 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhfrw\" (UniqueName: \"kubernetes.io/projected/f448c99d-c280-4172-8725-2e9de584ef00-kube-api-access-zhfrw\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.301817 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.301830 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f448c99d-c280-4172-8725-2e9de584ef00-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.764530 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.764565 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf" event={"ID":"f448c99d-c280-4172-8725-2e9de584ef00","Type":"ContainerDied","Data":"19e1fdb2b74c8e620efaa20fa3c7cda5c77ab8d199c65eacc591fd81a639f7ef"} Feb 20 00:19:37 crc kubenswrapper[5107]: I0220 00:19:37.764987 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e1fdb2b74c8e620efaa20fa3c7cda5c77ab8d199c65eacc591fd81a639f7ef" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.581609 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583512 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583547 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583560 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583570 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583599 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583609 5107 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583638 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583649 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583674 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583684 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="util" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583695 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583705 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583721 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583732 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583748 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583760 5107 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583770 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583779 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="pull" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583932 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f448c99d-c280-4172-8725-2e9de584ef00" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583951 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.583975 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="918d6f5a-f717-46f4-b49b-b057f68828da" containerName="extract" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.592898 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.598558 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.598558 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.598732 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-6vz4c\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.599274 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.638409 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.647847 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.649879 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-j47hh\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.650163 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.651390 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.657088 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.661784 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.703807 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.743824 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczxr\" (UniqueName: \"kubernetes.io/projected/d8430d39-cf33-43a4-922b-46a1456aecfc-kube-api-access-hczxr\") pod \"obo-prometheus-operator-9bc85b4bf-dqzb2\" (UID: \"d8430d39-cf33-43a4-922b-46a1456aecfc\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.824638 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-22brv"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.828240 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-22brv" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.830439 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-ndrgc\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.830840 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.844564 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.844630 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.844670 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hczxr\" (UniqueName: \"kubernetes.io/projected/d8430d39-cf33-43a4-922b-46a1456aecfc-kube-api-access-hczxr\") pod \"obo-prometheus-operator-9bc85b4bf-dqzb2\" (UID: \"d8430d39-cf33-43a4-922b-46a1456aecfc\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.844735 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.844846 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.861089 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-22brv"] Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.868787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hczxr\" (UniqueName: \"kubernetes.io/projected/d8430d39-cf33-43a4-922b-46a1456aecfc-kube-api-access-hczxr\") pod \"obo-prometheus-operator-9bc85b4bf-dqzb2\" (UID: \"d8430d39-cf33-43a4-922b-46a1456aecfc\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.912995 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.948802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.948855 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aedefad2-67f0-4b49-bb87-80b19cf0faf5-observability-operator-tls\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.948900 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.948933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.949186 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5whx\" (UniqueName: \"kubernetes.io/projected/aedefad2-67f0-4b49-bb87-80b19cf0faf5-kube-api-access-d5whx\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.949311 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.953250 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.955359 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.955376 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/868305ca-ba59-4b0f-9887-cc5967dd4e1e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm\" (UID: \"868305ca-ba59-4b0f-9887-cc5967dd4e1e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.963780 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a556d1d-e76d-4c41-aa39-cd3da09d0fc4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg\" (UID: \"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.967293 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"
Feb 20 00:19:40 crc kubenswrapper[5107]: I0220 00:19:40.997384 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-2dsh6"]
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.000361 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.006414 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.009563 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-gvcvl\""
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.009619 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-2dsh6"]
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.050665 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aedefad2-67f0-4b49-bb87-80b19cf0faf5-observability-operator-tls\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.050768 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5whx\" (UniqueName: \"kubernetes.io/projected/aedefad2-67f0-4b49-bb87-80b19cf0faf5-kube-api-access-d5whx\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.064996 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aedefad2-67f0-4b49-bb87-80b19cf0faf5-observability-operator-tls\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.083704 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5whx\" (UniqueName: \"kubernetes.io/projected/aedefad2-67f0-4b49-bb87-80b19cf0faf5-kube-api-access-d5whx\") pod \"observability-operator-85c68dddb-22brv\" (UID: \"aedefad2-67f0-4b49-bb87-80b19cf0faf5\") " pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.141649 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.151695 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-openshift-service-ca\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.151752 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b854t\" (UniqueName: \"kubernetes.io/projected/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-kube-api-access-b854t\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.253274 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-openshift-service-ca\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.253360 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b854t\" (UniqueName: \"kubernetes.io/projected/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-kube-api-access-b854t\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.255357 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-openshift-service-ca\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.277459 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b854t\" (UniqueName: \"kubernetes.io/projected/5679dc1f-7e3e-4370-998c-1cbb4b0fad69-kube-api-access-b854t\") pod \"perses-operator-669c9f96b5-2dsh6\" (UID: \"5679dc1f-7e3e-4370-998c-1cbb4b0fad69\") " pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.339553 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.420679 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2"]
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.517210 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm"]
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.526410 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg"]
Feb 20 00:19:41 crc kubenswrapper[5107]: W0220 00:19:41.538693 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a556d1d_e76d_4c41_aa39_cd3da09d0fc4.slice/crio-bb820a097086ae04a7faff1f1addd90d4e560c69d855f3749a4924eafcca64b3 WatchSource:0}: Error finding container bb820a097086ae04a7faff1f1addd90d4e560c69d855f3749a4924eafcca64b3: Status 404 returned error can't find the container with id bb820a097086ae04a7faff1f1addd90d4e560c69d855f3749a4924eafcca64b3
Feb 20 00:19:41 crc kubenswrapper[5107]: W0220 00:19:41.542899 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868305ca_ba59_4b0f_9887_cc5967dd4e1e.slice/crio-5726af7596067b6007d63c77f7be3123e603a7a8f2aecfbaee8603bcbe78245b WatchSource:0}: Error finding container 5726af7596067b6007d63c77f7be3123e603a7a8f2aecfbaee8603bcbe78245b: Status 404 returned error can't find the container with id 5726af7596067b6007d63c77f7be3123e603a7a8f2aecfbaee8603bcbe78245b
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.711420 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-22brv"]
Feb 20 00:19:41 crc kubenswrapper[5107]: W0220 00:19:41.718318 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaedefad2_67f0_4b49_bb87_80b19cf0faf5.slice/crio-9efc917da54dfa972bbfaff5c253c5d7cf94fad9c8ce8788fee95772f9097f98 WatchSource:0}: Error finding container 9efc917da54dfa972bbfaff5c253c5d7cf94fad9c8ce8788fee95772f9097f98: Status 404 returned error can't find the container with id 9efc917da54dfa972bbfaff5c253c5d7cf94fad9c8ce8788fee95772f9097f98
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.788161 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" event={"ID":"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4","Type":"ContainerStarted","Data":"bb820a097086ae04a7faff1f1addd90d4e560c69d855f3749a4924eafcca64b3"}
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.789380 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" event={"ID":"868305ca-ba59-4b0f-9887-cc5967dd4e1e","Type":"ContainerStarted","Data":"5726af7596067b6007d63c77f7be3123e603a7a8f2aecfbaee8603bcbe78245b"}
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.790545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-22brv" event={"ID":"aedefad2-67f0-4b49-bb87-80b19cf0faf5","Type":"ContainerStarted","Data":"9efc917da54dfa972bbfaff5c253c5d7cf94fad9c8ce8788fee95772f9097f98"}
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.791596 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" event={"ID":"d8430d39-cf33-43a4-922b-46a1456aecfc","Type":"ContainerStarted","Data":"6fdd9a3b44626ca8deb4b4e63c42dfbdad7c36e2d73f3af058be0f47f9918eef"}
Feb 20 00:19:41 crc kubenswrapper[5107]: I0220 00:19:41.800307 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-2dsh6"]
Feb 20 00:19:41 crc kubenswrapper[5107]: W0220 00:19:41.807137 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5679dc1f_7e3e_4370_998c_1cbb4b0fad69.slice/crio-d27a58d4664929781a46d5ce1a384de3b43eb1907adbf6cdbfef136bfd1ee269 WatchSource:0}: Error finding container d27a58d4664929781a46d5ce1a384de3b43eb1907adbf6cdbfef136bfd1ee269: Status 404 returned error can't find the container with id d27a58d4664929781a46d5ce1a384de3b43eb1907adbf6cdbfef136bfd1ee269
Feb 20 00:19:42 crc kubenswrapper[5107]: I0220 00:19:42.874609 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6" event={"ID":"5679dc1f-7e3e-4370-998c-1cbb4b0fad69","Type":"ContainerStarted","Data":"d27a58d4664929781a46d5ce1a384de3b43eb1907adbf6cdbfef136bfd1ee269"}
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.219264 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"]
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.230029 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"]
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.230203 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.234126 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.234346 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.234506 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-pvks4\""
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.301350 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4d7h\" (UniqueName: \"kubernetes.io/projected/c088d742-ebca-4f0c-8d38-5eaaa974b512-kube-api-access-s4d7h\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.301426 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c088d742-ebca-4f0c-8d38-5eaaa974b512-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.403419 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s4d7h\" (UniqueName: \"kubernetes.io/projected/c088d742-ebca-4f0c-8d38-5eaaa974b512-kube-api-access-s4d7h\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.403806 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c088d742-ebca-4f0c-8d38-5eaaa974b512-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.404391 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c088d742-ebca-4f0c-8d38-5eaaa974b512-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.440535 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4d7h\" (UniqueName: \"kubernetes.io/projected/c088d742-ebca-4f0c-8d38-5eaaa974b512-kube-api-access-s4d7h\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-md7td\" (UID: \"c088d742-ebca-4f0c-8d38-5eaaa974b512\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:43 crc kubenswrapper[5107]: I0220 00:19:43.567066 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"
Feb 20 00:19:44 crc kubenswrapper[5107]: I0220 00:19:44.155109 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td"]
Feb 20 00:19:44 crc kubenswrapper[5107]: W0220 00:19:44.163883 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc088d742_ebca_4f0c_8d38_5eaaa974b512.slice/crio-9549004fc4e262e7a18f4d7b53c205ce2bc451612e0545b112795fbca9a4f07f WatchSource:0}: Error finding container 9549004fc4e262e7a18f4d7b53c205ce2bc451612e0545b112795fbca9a4f07f: Status 404 returned error can't find the container with id 9549004fc4e262e7a18f4d7b53c205ce2bc451612e0545b112795fbca9a4f07f
Feb 20 00:19:44 crc kubenswrapper[5107]: I0220 00:19:44.899046 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td" event={"ID":"c088d742-ebca-4f0c-8d38-5eaaa974b512","Type":"ContainerStarted","Data":"9549004fc4e262e7a18f4d7b53c205ce2bc451612e0545b112795fbca9a4f07f"}
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.120516 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vdjld"]
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.134163 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld"
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.140338 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.140351 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-598p9\""
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.151267 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.155997 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vdjld"]
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.271921 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbhbk\" (UniqueName: \"kubernetes.io/projected/36e00c5a-de32-446f-95ba-2d7a5ef5f7e3-kube-api-access-rbhbk\") pod \"interconnect-operator-78b9bd8798-vdjld\" (UID: \"36e00c5a-de32-446f-95ba-2d7a5ef5f7e3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld"
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.375805 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbhbk\" (UniqueName: \"kubernetes.io/projected/36e00c5a-de32-446f-95ba-2d7a5ef5f7e3-kube-api-access-rbhbk\") pod \"interconnect-operator-78b9bd8798-vdjld\" (UID: \"36e00c5a-de32-446f-95ba-2d7a5ef5f7e3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld"
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.393533 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbhbk\" (UniqueName: \"kubernetes.io/projected/36e00c5a-de32-446f-95ba-2d7a5ef5f7e3-kube-api-access-rbhbk\") pod \"interconnect-operator-78b9bd8798-vdjld\" (UID: \"36e00c5a-de32-446f-95ba-2d7a5ef5f7e3\") " pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld"
Feb 20 00:19:47 crc kubenswrapper[5107]: I0220 00:19:47.451595 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.236609 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-649d5bcd77-p272x"]
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.243132 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.246683 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-dhkhb\""
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.246747 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.254783 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-649d5bcd77-p272x"]
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.298828 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-apiservice-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.298899 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n774j\" (UniqueName: \"kubernetes.io/projected/800909d3-5987-46c0-b62b-fbdac9ee084d-kube-api-access-n774j\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.298976 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-webhook-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.400494 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-webhook-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.400572 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-apiservice-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.400617 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n774j\" (UniqueName: \"kubernetes.io/projected/800909d3-5987-46c0-b62b-fbdac9ee084d-kube-api-access-n774j\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.411239 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-apiservice-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.414422 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/800909d3-5987-46c0-b62b-fbdac9ee084d-webhook-cert\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.437805 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n774j\" (UniqueName: \"kubernetes.io/projected/800909d3-5987-46c0-b62b-fbdac9ee084d-kube-api-access-n774j\") pod \"elastic-operator-649d5bcd77-p272x\" (UID: \"800909d3-5987-46c0-b62b-fbdac9ee084d\") " pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:49 crc kubenswrapper[5107]: I0220 00:19:49.558639 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-649d5bcd77-p272x"
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.115383 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-vdjld"]
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.286414 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-649d5bcd77-p272x"]
Feb 20 00:19:54 crc kubenswrapper[5107]: W0220 00:19:54.289169 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod800909d3_5987_46c0_b62b_fbdac9ee084d.slice/crio-04aaea0a5a0d3eb67b7b0707e698c7ed863eb0411b025ed82650c4b40fc24eea WatchSource:0}: Error finding container 04aaea0a5a0d3eb67b7b0707e698c7ed863eb0411b025ed82650c4b40fc24eea: Status 404 returned error can't find the container with id 04aaea0a5a0d3eb67b7b0707e698c7ed863eb0411b025ed82650c4b40fc24eea
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.970202 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" event={"ID":"d8430d39-cf33-43a4-922b-46a1456aecfc","Type":"ContainerStarted","Data":"6926e735c74399a18f04f67f020420aaad181824c66298b06813696968695347"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.971613 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" event={"ID":"0a556d1d-e76d-4c41-aa39-cd3da09d0fc4","Type":"ContainerStarted","Data":"47ef2cc5dd83aa85f4c0eb783b253cdb0e9e70fbc05d90e75a30c28d1e09f7e0"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.973083 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-649d5bcd77-p272x" event={"ID":"800909d3-5987-46c0-b62b-fbdac9ee084d","Type":"ContainerStarted","Data":"04aaea0a5a0d3eb67b7b0707e698c7ed863eb0411b025ed82650c4b40fc24eea"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.974791 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" event={"ID":"868305ca-ba59-4b0f-9887-cc5967dd4e1e","Type":"ContainerStarted","Data":"2adb625b66a3db0a63882e5e4bcd9e27f31847e20b31b412de45372bae50b988"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.976450 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td" event={"ID":"c088d742-ebca-4f0c-8d38-5eaaa974b512","Type":"ContainerStarted","Data":"7b7c9107916f0860495f696211f01283ec060c1fdf93fe8ce07815a85cc9b2f1"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.977959 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-22brv" event={"ID":"aedefad2-67f0-4b49-bb87-80b19cf0faf5","Type":"ContainerStarted","Data":"236d3dc4ebe443514c1eb1a8679470e48878d21842484c4cc8244e28682b085b"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.978187 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.980072 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-22brv"
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.982055 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6" event={"ID":"5679dc1f-7e3e-4370-998c-1cbb4b0fad69","Type":"ContainerStarted","Data":"0b0fe5f777c3a5854777147d2b4b58b9ae5f6919e438be1df1166dd36543f76f"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.982222 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6"
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.989121 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld" event={"ID":"36e00c5a-de32-446f-95ba-2d7a5ef5f7e3","Type":"ContainerStarted","Data":"5d5fcbf730459c6c40905ebb5de9145a09101acb3d1910800ab25b9e2407cb26"}
Feb 20 00:19:54 crc kubenswrapper[5107]: I0220 00:19:54.990752 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-dqzb2" podStartSLOduration=2.649222785 podStartE2EDuration="14.99073776s" podCreationTimestamp="2026-02-20 00:19:40 +0000 UTC" firstStartedPulling="2026-02-20 00:19:41.433988826 +0000 UTC m=+667.802646402" lastFinishedPulling="2026-02-20 00:19:53.775503811 +0000 UTC m=+680.144161377" observedRunningTime="2026-02-20 00:19:54.986389261 +0000 UTC m=+681.355046827" watchObservedRunningTime="2026-02-20 00:19:54.99073776 +0000 UTC m=+681.359395326"
Feb 20 00:19:55 crc kubenswrapper[5107]: I0220 00:19:55.030906 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-22brv" podStartSLOduration=2.921971444 podStartE2EDuration="15.030887226s" podCreationTimestamp="2026-02-20 00:19:40 +0000 UTC" firstStartedPulling="2026-02-20 00:19:41.723468452 +0000 UTC m=+668.092126058" lastFinishedPulling="2026-02-20 00:19:53.832384274 +0000 UTC m=+680.201041840" observedRunningTime="2026-02-20 00:19:55.027462593 +0000 UTC m=+681.396120159" watchObservedRunningTime="2026-02-20 00:19:55.030887226 +0000 UTC m=+681.399544802"
Feb 20 00:19:55 crc kubenswrapper[5107]: I0220 00:19:55.056193 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6" podStartSLOduration=3.099645457 podStartE2EDuration="15.056172417s" podCreationTimestamp="2026-02-20 00:19:40 +0000 UTC" firstStartedPulling="2026-02-20 00:19:41.811045714 +0000 UTC m=+668.179703280" lastFinishedPulling="2026-02-20 00:19:53.767572674 +0000 UTC m=+680.136230240" observedRunningTime="2026-02-20 00:19:55.046836742 +0000 UTC m=+681.415494328" watchObservedRunningTime="2026-02-20 00:19:55.056172417 +0000 UTC m=+681.424830003"
Feb 20 00:19:55 crc kubenswrapper[5107]: I0220 00:19:55.072331 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-md7td" podStartSLOduration=2.4642170549999998 podStartE2EDuration="12.072310528s" podCreationTimestamp="2026-02-20 00:19:43 +0000 UTC" firstStartedPulling="2026-02-20 00:19:44.167894271 +0000 UTC m=+670.536551837" lastFinishedPulling="2026-02-20 00:19:53.775987744 +0000 UTC m=+680.144645310" observedRunningTime="2026-02-20 00:19:55.072119723 +0000 UTC m=+681.440777289" watchObservedRunningTime="2026-02-20 00:19:55.072310528 +0000 UTC m=+681.440968104"
Feb 20 00:19:55 crc kubenswrapper[5107]: I0220 00:19:55.103962 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg" podStartSLOduration=2.872919565 podStartE2EDuration="15.103938422s" podCreationTimestamp="2026-02-20 00:19:40 +0000 UTC" firstStartedPulling="2026-02-20 00:19:41.546786237 +0000 UTC m=+667.915443803" lastFinishedPulling="2026-02-20 00:19:53.777805084 +0000 UTC m=+680.146462660" observedRunningTime="2026-02-20 00:19:55.090986608 +0000 UTC m=+681.459644184" watchObservedRunningTime="2026-02-20 00:19:55.103938422 +0000 UTC m=+681.472595978"
Feb 20 00:19:55 crc kubenswrapper[5107]: I0220 00:19:55.117524 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm" podStartSLOduration=2.919763913 podStartE2EDuration="15.1174449s" podCreationTimestamp="2026-02-20 00:19:40 +0000 UTC" firstStartedPulling="2026-02-20 00:19:41.546666723 +0000 UTC m=+667.915324289" lastFinishedPulling="2026-02-20 00:19:53.74434771 +0000 UTC m=+680.113005276" observedRunningTime="2026-02-20 00:19:55.110241084 +0000 UTC m=+681.478898650" watchObservedRunningTime="2026-02-20 00:19:55.1174449 +0000 UTC m=+681.486102466"
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.037518 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"]
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.053977 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"]
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.054079 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.055991 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-zkwhr\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.056211 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.056970 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.125676 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525780-fkb67"]
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.138192 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-fkb67"
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.140560 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.140830 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.142366 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\""
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.148057 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-fkb67"]
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.182154 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.182263 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77nx8\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-kube-api-access-77nx8\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"
Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.283076 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5qr\" (UniqueName:
\"kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr\") pod \"auto-csr-approver-29525780-fkb67\" (UID: \"811bd17b-84bc-41bb-aaed-ca5fff6a638e\") " pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.283175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.283222 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77nx8\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-kube-api-access-77nx8\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.330002 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77nx8\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-kube-api-access-77nx8\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.335311 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/281d60a7-353e-4833-a849-d020232dc2c8-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-zxrwj\" (UID: \"281d60a7-353e-4833-a849-d020232dc2c8\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.370497 5107 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.384811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5qr\" (UniqueName: \"kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr\") pod \"auto-csr-approver-29525780-fkb67\" (UID: \"811bd17b-84bc-41bb-aaed-ca5fff6a638e\") " pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.426840 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz5qr\" (UniqueName: \"kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr\") pod \"auto-csr-approver-29525780-fkb67\" (UID: \"811bd17b-84bc-41bb-aaed-ca5fff6a638e\") " pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:00 crc kubenswrapper[5107]: I0220 00:20:00.463066 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.264624 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-pnlqb"] Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.323460 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-pnlqb"] Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.323594 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.328405 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-h2vng\"" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.499820 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.499893 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtftl\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-kube-api-access-rtftl\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.600691 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.600790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtftl\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-kube-api-access-rtftl\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 
00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.621574 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.632329 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtftl\" (UniqueName: \"kubernetes.io/projected/e2235000-3217-4c46-8d84-a0c0e737d469-kube-api-access-rtftl\") pod \"cert-manager-webhook-597b96b99b-pnlqb\" (UID: \"e2235000-3217-4c46-8d84-a0c0e737d469\") " pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:01 crc kubenswrapper[5107]: I0220 00:20:01.639215 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:02 crc kubenswrapper[5107]: I0220 00:20:02.824000 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:20:02 crc kubenswrapper[5107]: I0220 00:20:02.824063 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:20:03 crc kubenswrapper[5107]: I0220 00:20:03.970808 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-fkb67"] Feb 20 00:20:03 crc kubenswrapper[5107]: W0220 00:20:03.972967 5107 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod811bd17b_84bc_41bb_aaed_ca5fff6a638e.slice/crio-b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f WatchSource:0}: Error finding container b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f: Status 404 returned error can't find the container with id b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f Feb 20 00:20:03 crc kubenswrapper[5107]: I0220 00:20:03.991792 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-pnlqb"] Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.059568 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" event={"ID":"e2235000-3217-4c46-8d84-a0c0e737d469","Type":"ContainerStarted","Data":"bf9d949fadd04758c9324f80e9ddb3172fc7ab651c3295c9177c7cebac119026"} Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.061537 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-649d5bcd77-p272x" event={"ID":"800909d3-5987-46c0-b62b-fbdac9ee084d","Type":"ContainerStarted","Data":"9635230aed5062c81d84a87f834ae58a73efe8a667f9970632f2131d64dafbc5"} Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.063495 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-fkb67" event={"ID":"811bd17b-84bc-41bb-aaed-ca5fff6a638e","Type":"ContainerStarted","Data":"b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f"} Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.065353 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld" event={"ID":"36e00c5a-de32-446f-95ba-2d7a5ef5f7e3","Type":"ContainerStarted","Data":"25d4401cf6039438f540d91a9b9bf4ca8a3ec36299cba8ea463b33570e9c5c37"} Feb 20 00:20:04 crc 
kubenswrapper[5107]: I0220 00:20:04.080588 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-649d5bcd77-p272x" podStartSLOduration=6.035522153 podStartE2EDuration="15.080569808s" podCreationTimestamp="2026-02-20 00:19:49 +0000 UTC" firstStartedPulling="2026-02-20 00:19:54.291870333 +0000 UTC m=+680.660527899" lastFinishedPulling="2026-02-20 00:20:03.336917988 +0000 UTC m=+689.705575554" observedRunningTime="2026-02-20 00:20:04.079032136 +0000 UTC m=+690.447689702" watchObservedRunningTime="2026-02-20 00:20:04.080569808 +0000 UTC m=+690.449227374" Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.159939 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-vdjld" podStartSLOduration=7.8295237669999995 podStartE2EDuration="17.159919555s" podCreationTimestamp="2026-02-20 00:19:47 +0000 UTC" firstStartedPulling="2026-02-20 00:19:54.127555296 +0000 UTC m=+680.496212862" lastFinishedPulling="2026-02-20 00:20:03.457951084 +0000 UTC m=+689.826608650" observedRunningTime="2026-02-20 00:20:04.106052634 +0000 UTC m=+690.474710200" watchObservedRunningTime="2026-02-20 00:20:04.159919555 +0000 UTC m=+690.528577121" Feb 20 00:20:04 crc kubenswrapper[5107]: W0220 00:20:04.165605 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod281d60a7_353e_4833_a849_d020232dc2c8.slice/crio-f7f570e6a87eaa67da8d65f7c47764ee1219c5b755ff2862caa5e4fa3907562f WatchSource:0}: Error finding container f7f570e6a87eaa67da8d65f7c47764ee1219c5b755ff2862caa5e4fa3907562f: Status 404 returned error can't find the container with id f7f570e6a87eaa67da8d65f7c47764ee1219c5b755ff2862caa5e4fa3907562f Feb 20 00:20:04 crc kubenswrapper[5107]: I0220 00:20:04.167773 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-zxrwj"] Feb 20 00:20:05 
crc kubenswrapper[5107]: I0220 00:20:05.072066 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" event={"ID":"281d60a7-353e-4833-a849-d020232dc2c8","Type":"ContainerStarted","Data":"f7f570e6a87eaa67da8d65f7c47764ee1219c5b755ff2862caa5e4fa3907562f"} Feb 20 00:20:06 crc kubenswrapper[5107]: I0220 00:20:06.005278 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-2dsh6" Feb 20 00:20:06 crc kubenswrapper[5107]: I0220 00:20:06.083003 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-fkb67" event={"ID":"811bd17b-84bc-41bb-aaed-ca5fff6a638e","Type":"ContainerStarted","Data":"741c24ae042c7236eec96bb6a2f2fa47bafe41a88ead0a05ee90586b8eda2654"} Feb 20 00:20:06 crc kubenswrapper[5107]: I0220 00:20:06.098551 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525780-fkb67" podStartSLOduration=4.742621979 podStartE2EDuration="6.09853044s" podCreationTimestamp="2026-02-20 00:20:00 +0000 UTC" firstStartedPulling="2026-02-20 00:20:03.97810227 +0000 UTC m=+690.346759836" lastFinishedPulling="2026-02-20 00:20:05.334010731 +0000 UTC m=+691.702668297" observedRunningTime="2026-02-20 00:20:06.097756419 +0000 UTC m=+692.466413985" watchObservedRunningTime="2026-02-20 00:20:06.09853044 +0000 UTC m=+692.467188006" Feb 20 00:20:07 crc kubenswrapper[5107]: I0220 00:20:07.103783 5107 generic.go:358] "Generic (PLEG): container finished" podID="811bd17b-84bc-41bb-aaed-ca5fff6a638e" containerID="741c24ae042c7236eec96bb6a2f2fa47bafe41a88ead0a05ee90586b8eda2654" exitCode=0 Feb 20 00:20:07 crc kubenswrapper[5107]: I0220 00:20:07.103944 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-fkb67" 
event={"ID":"811bd17b-84bc-41bb-aaed-ca5fff6a638e","Type":"ContainerDied","Data":"741c24ae042c7236eec96bb6a2f2fa47bafe41a88ead0a05ee90586b8eda2654"} Feb 20 00:20:08 crc kubenswrapper[5107]: I0220 00:20:08.424307 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:08 crc kubenswrapper[5107]: I0220 00:20:08.498498 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5qr\" (UniqueName: \"kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr\") pod \"811bd17b-84bc-41bb-aaed-ca5fff6a638e\" (UID: \"811bd17b-84bc-41bb-aaed-ca5fff6a638e\") " Feb 20 00:20:08 crc kubenswrapper[5107]: I0220 00:20:08.509680 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr" (OuterVolumeSpecName: "kube-api-access-cz5qr") pod "811bd17b-84bc-41bb-aaed-ca5fff6a638e" (UID: "811bd17b-84bc-41bb-aaed-ca5fff6a638e"). InnerVolumeSpecName "kube-api-access-cz5qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:20:08 crc kubenswrapper[5107]: I0220 00:20:08.600035 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cz5qr\" (UniqueName: \"kubernetes.io/projected/811bd17b-84bc-41bb-aaed-ca5fff6a638e-kube-api-access-cz5qr\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:09 crc kubenswrapper[5107]: I0220 00:20:09.124921 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525780-fkb67" event={"ID":"811bd17b-84bc-41bb-aaed-ca5fff6a638e","Type":"ContainerDied","Data":"b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f"} Feb 20 00:20:09 crc kubenswrapper[5107]: I0220 00:20:09.125187 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b717e35ab25bcd68c585583ba7aa9a09ece94a8129221f87ba8c2cc89030a14f" Feb 20 00:20:09 crc kubenswrapper[5107]: I0220 00:20:09.124952 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525780-fkb67" Feb 20 00:20:11 crc kubenswrapper[5107]: I0220 00:20:11.144366 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" event={"ID":"e2235000-3217-4c46-8d84-a0c0e737d469","Type":"ContainerStarted","Data":"b591e7532bd03f5a59637dd45eb1b7c59160bcb22eae470cf6ccf2e6a1a6434b"} Feb 20 00:20:11 crc kubenswrapper[5107]: I0220 00:20:11.144692 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:11 crc kubenswrapper[5107]: I0220 00:20:11.145920 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" event={"ID":"281d60a7-353e-4833-a849-d020232dc2c8","Type":"ContainerStarted","Data":"bb30e9044e02e39b9326f7d0ec253b19a7c436e33328516ae035c97c6e5f15cd"} Feb 20 00:20:11 crc kubenswrapper[5107]: I0220 00:20:11.161962 5107 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" podStartSLOduration=3.660858506 podStartE2EDuration="10.161947395s" podCreationTimestamp="2026-02-20 00:20:01 +0000 UTC" firstStartedPulling="2026-02-20 00:20:04.01803755 +0000 UTC m=+690.386695116" lastFinishedPulling="2026-02-20 00:20:10.519126429 +0000 UTC m=+696.887784005" observedRunningTime="2026-02-20 00:20:11.15813588 +0000 UTC m=+697.526793436" watchObservedRunningTime="2026-02-20 00:20:11.161947395 +0000 UTC m=+697.530604961" Feb 20 00:20:11 crc kubenswrapper[5107]: I0220 00:20:11.175298 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-zxrwj" podStartSLOduration=4.81445003 podStartE2EDuration="11.175273338s" podCreationTimestamp="2026-02-20 00:20:00 +0000 UTC" firstStartedPulling="2026-02-20 00:20:04.167713478 +0000 UTC m=+690.536371034" lastFinishedPulling="2026-02-20 00:20:10.528536776 +0000 UTC m=+696.897194342" observedRunningTime="2026-02-20 00:20:11.173736567 +0000 UTC m=+697.542394133" watchObservedRunningTime="2026-02-20 00:20:11.175273338 +0000 UTC m=+697.543930914" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.303826 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.305005 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="811bd17b-84bc-41bb-aaed-ca5fff6a638e" containerName="oc" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.305021 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bd17b-84bc-41bb-aaed-ca5fff6a638e" containerName="oc" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.305139 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="811bd17b-84bc-41bb-aaed-ca5fff6a638e" containerName="oc" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 
00:20:15.329307 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338288 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338402 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338593 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338612 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-6h4l5\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338735 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.338808 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.339056 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.339242 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.339458 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 
00:20:15.345173 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394191 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394252 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394288 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394311 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394396 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394428 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3843b818-6c4f-4935-a475-6fac500764f9-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394458 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394538 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394579 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394606 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394622 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394690 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394740 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394774 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.394793 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496264 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496336 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496358 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496380 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496398 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496419 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496436 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496456 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496472 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496502 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496520 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496544 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 
00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496561 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496600 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.496627 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3843b818-6c4f-4935-a475-6fac500764f9-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.498476 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.498653 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.498889 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.498966 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.499215 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.499538 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.499870 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.502999 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.503432 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/3843b818-6c4f-4935-a475-6fac500764f9-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.504816 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.505363 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/3843b818-6c4f-4935-a475-6fac500764f9-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.505386 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.505573 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.506009 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.517661 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/3843b818-6c4f-4935-a475-6fac500764f9-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"3843b818-6c4f-4935-a475-6fac500764f9\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:15 crc kubenswrapper[5107]: I0220 00:20:15.660514 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:16 crc kubenswrapper[5107]: I0220 00:20:16.115804 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:20:16 crc kubenswrapper[5107]: W0220 00:20:16.120531 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3843b818_6c4f_4935_a475_6fac500764f9.slice/crio-31c417f67eee2beeb75da12d48090c13c3e8c2dac97151f1a8e49953cc2aa8a1 WatchSource:0}: Error finding container 31c417f67eee2beeb75da12d48090c13c3e8c2dac97151f1a8e49953cc2aa8a1: Status 404 returned error can't find the container with id 31c417f67eee2beeb75da12d48090c13c3e8c2dac97151f1a8e49953cc2aa8a1 Feb 20 00:20:16 crc kubenswrapper[5107]: I0220 00:20:16.199734 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3843b818-6c4f-4935-a475-6fac500764f9","Type":"ContainerStarted","Data":"31c417f67eee2beeb75da12d48090c13c3e8c2dac97151f1a8e49953cc2aa8a1"} Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.033590 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-lmsmj"] Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.209664 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-lmsmj"] Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.209763 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-pnlqb" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.209849 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.213599 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-4jdvq\"" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.323227 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwt8\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-kube-api-access-cxwt8\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.323666 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-bound-sa-token\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.428828 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-bound-sa-token\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.428885 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxwt8\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-kube-api-access-cxwt8\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.448413 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-bound-sa-token\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.458291 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxwt8\" (UniqueName: \"kubernetes.io/projected/b0faf488-8462-43cb-8579-20a0407bdfd9-kube-api-access-cxwt8\") pod \"cert-manager-759f64656b-lmsmj\" (UID: \"b0faf488-8462-43cb-8579-20a0407bdfd9\") " pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:17 crc kubenswrapper[5107]: I0220 00:20:17.539700 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-lmsmj" Feb 20 00:20:18 crc kubenswrapper[5107]: I0220 00:20:18.025405 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-lmsmj"] Feb 20 00:20:18 crc kubenswrapper[5107]: I0220 00:20:18.216391 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-lmsmj" event={"ID":"b0faf488-8462-43cb-8579-20a0407bdfd9","Type":"ContainerStarted","Data":"1fb57ca61f2ca1f15cbe83a7597cae67d37a3fbe449729418ebebe65d4886bed"} Feb 20 00:20:22 crc kubenswrapper[5107]: I0220 00:20:22.247110 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-lmsmj" event={"ID":"b0faf488-8462-43cb-8579-20a0407bdfd9","Type":"ContainerStarted","Data":"611ffa2a38315fd5afe84be6cb35ab5f6213e5451ee4875af5e1891c81a52cf3"} Feb 20 00:20:22 crc kubenswrapper[5107]: I0220 00:20:22.266747 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-lmsmj" podStartSLOduration=5.266726203 podStartE2EDuration="5.266726203s" podCreationTimestamp="2026-02-20 
00:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:20:22.265818718 +0000 UTC m=+708.634476304" watchObservedRunningTime="2026-02-20 00:20:22.266726203 +0000 UTC m=+708.635383769" Feb 20 00:20:30 crc kubenswrapper[5107]: I0220 00:20:30.300775 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3843b818-6c4f-4935-a475-6fac500764f9","Type":"ContainerStarted","Data":"90430c2b71c3025ed81ef94e5e69ce426f1852cce0e08b2ac3bb7c485987c854"} Feb 20 00:20:30 crc kubenswrapper[5107]: I0220 00:20:30.463365 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:20:30 crc kubenswrapper[5107]: I0220 00:20:30.504209 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.318271 5107 generic.go:358] "Generic (PLEG): container finished" podID="3843b818-6c4f-4935-a475-6fac500764f9" containerID="90430c2b71c3025ed81ef94e5e69ce426f1852cce0e08b2ac3bb7c485987c854" exitCode=0 Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.318324 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3843b818-6c4f-4935-a475-6fac500764f9","Type":"ContainerDied","Data":"90430c2b71c3025ed81ef94e5e69ce426f1852cce0e08b2ac3bb7c485987c854"} Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.824545 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.824874 5107 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.824923 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.825578 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:20:32 crc kubenswrapper[5107]: I0220 00:20:32.825655 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee" gracePeriod=600 Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.325318 5107 generic.go:358] "Generic (PLEG): container finished" podID="3843b818-6c4f-4935-a475-6fac500764f9" containerID="6ae354710b09881292ec4f85491b350cdfb9d2e23cb270ba690412eacef2091b" exitCode=0 Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.325357 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3843b818-6c4f-4935-a475-6fac500764f9","Type":"ContainerDied","Data":"6ae354710b09881292ec4f85491b350cdfb9d2e23cb270ba690412eacef2091b"} Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.327754 5107 generic.go:358] "Generic (PLEG): container 
finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee" exitCode=0 Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.327778 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee"} Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.327813 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f"} Feb 20 00:20:33 crc kubenswrapper[5107]: I0220 00:20:33.327833 5107 scope.go:117] "RemoveContainer" containerID="b9df3b3da41a36f1c87d440f95abb0b94cf412d0091dd42441acad625411180c" Feb 20 00:20:34 crc kubenswrapper[5107]: I0220 00:20:34.339398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"3843b818-6c4f-4935-a475-6fac500764f9","Type":"ContainerStarted","Data":"c5c0497cbb09ec680c0bddc15734050e0c83f1fcf5c9e83c28028835aa4db637"} Feb 20 00:20:34 crc kubenswrapper[5107]: I0220 00:20:34.340020 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:34 crc kubenswrapper[5107]: I0220 00:20:34.371286 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.433798842 podStartE2EDuration="19.371267533s" podCreationTimestamp="2026-02-20 00:20:15 +0000 UTC" firstStartedPulling="2026-02-20 00:20:16.12613373 +0000 UTC m=+702.494791306" lastFinishedPulling="2026-02-20 00:20:29.063602431 +0000 UTC m=+715.432259997" 
observedRunningTime="2026-02-20 00:20:34.368494585 +0000 UTC m=+720.737152151" watchObservedRunningTime="2026-02-20 00:20:34.371267533 +0000 UTC m=+720.739925099" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.160695 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.172113 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.172370 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.174952 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.175208 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.175764 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.176502 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.291561 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 
crc kubenswrapper[5107]: I0220 00:20:35.291615 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.291813 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.291887 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.291930 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.291996 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292079 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292121 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292178 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292239 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcvk\" (UniqueName: \"kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 20 
00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292266 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.292296 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.393296 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394056 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcvk\" (UniqueName: \"kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394185 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394300 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394363 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394733 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394688 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394910 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.394994 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395251 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395845 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395980 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396099 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396256 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396116 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396464 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395325 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.395638 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396256 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.396941 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.401610 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.415414 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcvk\" (UniqueName: \"kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.416883 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.486799 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:35 crc kubenswrapper[5107]: I0220 00:20:35.929638 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 20 00:20:36 crc kubenswrapper[5107]: I0220 00:20:36.354018 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4fd7093c-854b-4f5a-affc-21412edec91d","Type":"ContainerStarted","Data":"3d382ead0b89e8dc9ae7a53792ce986f24730d1d42ea527818f6adb49e39e624"}
Feb 20 00:20:41 crc kubenswrapper[5107]: I0220 00:20:41.387193 5107 generic.go:358] "Generic (PLEG): container finished" podID="4fd7093c-854b-4f5a-affc-21412edec91d" containerID="6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3" exitCode=0
Feb 20 00:20:41 crc kubenswrapper[5107]: I0220 00:20:41.387318 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4fd7093c-854b-4f5a-affc-21412edec91d","Type":"ContainerDied","Data":"6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3"}
Feb 20 00:20:42 crc kubenswrapper[5107]: I0220 00:20:42.408319 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4fd7093c-854b-4f5a-affc-21412edec91d","Type":"ContainerStarted","Data":"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"}
Feb 20 00:20:42 crc kubenswrapper[5107]: I0220 00:20:42.441659 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=2.833361551 podStartE2EDuration="7.441639236s" podCreationTimestamp="2026-02-20 00:20:35 +0000 UTC" firstStartedPulling="2026-02-20 00:20:35.933498702 +0000 UTC m=+722.302156268" lastFinishedPulling="2026-02-20 00:20:40.541776387 +0000 UTC m=+726.910433953" observedRunningTime="2026-02-20 00:20:42.434649219 +0000 UTC m=+728.803306815" watchObservedRunningTime="2026-02-20 00:20:42.441639236 +0000 UTC m=+728.810296812"
Feb 20 00:20:45 crc kubenswrapper[5107]: I0220 00:20:45.360565 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 20 00:20:45 crc kubenswrapper[5107]: I0220 00:20:45.361083 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="docker-build" containerID="cri-o://4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446" gracePeriod=30
Feb 20 00:20:45 crc kubenswrapper[5107]: I0220 00:20:45.458385 5107 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="3843b818-6c4f-4935-a475-6fac500764f9" containerName="elasticsearch" probeResult="failure" output=<
Feb 20 00:20:45 crc kubenswrapper[5107]: {"timestamp": "2026-02-20T00:20:45+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 20 00:20:45 crc kubenswrapper[5107]: >
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.003235 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.876663 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.877508 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.881417 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.881459 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.881434 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.983881 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984039 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984100 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984213 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984374 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984420 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984454 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984616 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984675 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984856 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.984950 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:47 crc kubenswrapper[5107]: I0220 00:20:47.985041 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2g4s\" (UniqueName: \"kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087092 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087203 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087237 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087326 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087402 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087488 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087546 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.087630 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2g4s\" (UniqueName: \"kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088281 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088283 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088432 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088439 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088497 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088539 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088598 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088539 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.088800 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.089261 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.089288 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.089390 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.089594 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.094245 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.095779 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.121971 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2g4s\" (UniqueName: \"kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s\") pod \"service-telemetry-operator-2-build\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.209858 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.419712 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_4fd7093c-854b-4f5a-affc-21412edec91d/docker-build/0.log"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.420063 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.452552 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_4fd7093c-854b-4f5a-affc-21412edec91d/docker-build/0.log"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.452854 5107 generic.go:358] "Generic (PLEG): container finished" podID="4fd7093c-854b-4f5a-affc-21412edec91d" containerID="4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446" exitCode=1
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.452936 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.453084 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4fd7093c-854b-4f5a-affc-21412edec91d","Type":"ContainerDied","Data":"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"}
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.453118 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4fd7093c-854b-4f5a-affc-21412edec91d","Type":"ContainerDied","Data":"3d382ead0b89e8dc9ae7a53792ce986f24730d1d42ea527818f6adb49e39e624"}
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.453136 5107 scope.go:117] "RemoveContainer" containerID="4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.474965 5107 scope.go:117] "RemoveContainer" containerID="6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.541512 5107 scope.go:117] "RemoveContainer" containerID="4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"
Feb 20 00:20:48 crc kubenswrapper[5107]: E0220 00:20:48.541895 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446\": container with ID starting with 4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446 not found: ID does not exist" containerID="4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.541940 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446"} err="failed to get container status \"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446\": rpc error: code = NotFound desc = could not find container \"4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446\": container with ID starting with 4326d4cf44e3e9459e7f9595d0bdf90fa7e58636beea86d375f8b328d046b446 not found: ID does not exist"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.541966 5107 scope.go:117] "RemoveContainer" containerID="6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3"
Feb 20 00:20:48 crc kubenswrapper[5107]: E0220 00:20:48.542258 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3\": container with ID starting with 6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3 not found: ID does not exist" containerID="6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.542309 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3"} err="failed to get container status \"6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3\": rpc error: code = NotFound desc = could not find container \"6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3\": container with ID starting with 6f705db1d3d119fa381f57350c8d65e0e07dfa61b1117dc620b8dbb8177742a3 not found: ID does not exist"
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.601949 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602009 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602052 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602068 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602087 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602154 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602195 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602249 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602275 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602299 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602317 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tcvk\" (UniqueName: \"kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.602341 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull\") pod \"4fd7093c-854b-4f5a-affc-21412edec91d\" (UID: \"4fd7093c-854b-4f5a-affc-21412edec91d\") "
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.603055 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.603601 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.603108 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.603134 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "node-pullsecrets".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.603179 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.604270 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.604418 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.604633 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.605215 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.608830 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.608896 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk" (OuterVolumeSpecName: "kube-api-access-9tcvk") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "kube-api-access-9tcvk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.609675 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "4fd7093c-854b-4f5a-affc-21412edec91d" (UID: "4fd7093c-854b-4f5a-affc-21412edec91d"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.674344 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704210 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704242 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704254 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4fd7093c-854b-4f5a-affc-21412edec91d-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704263 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704272 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704280 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704288 5107 reconciler_common.go:299] "Volume detached for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704297 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4fd7093c-854b-4f5a-affc-21412edec91d-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704309 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704319 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4fd7093c-854b-4f5a-affc-21412edec91d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704328 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tcvk\" (UniqueName: \"kubernetes.io/projected/4fd7093c-854b-4f5a-affc-21412edec91d-kube-api-access-9tcvk\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.704338 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/4fd7093c-854b-4f5a-affc-21412edec91d-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.792335 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 20 00:20:48 crc kubenswrapper[5107]: I0220 00:20:48.798921 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 20 00:20:49 crc kubenswrapper[5107]: I0220 00:20:49.462782 
5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerStarted","Data":"1d52580e9b44627fa3ff69c6115c0b5ff8e1292b834cad1b3dc7a948a58b8b52"} Feb 20 00:20:49 crc kubenswrapper[5107]: I0220 00:20:49.463978 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerStarted","Data":"c27a10fb7d373d4ef28cff0ec2af1e5db51b19709854b637efd2336cb40aabcf"} Feb 20 00:20:50 crc kubenswrapper[5107]: I0220 00:20:50.493453 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" path="/var/lib/kubelet/pods/4fd7093c-854b-4f5a-affc-21412edec91d/volumes" Feb 20 00:20:50 crc kubenswrapper[5107]: I0220 00:20:50.779343 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 20 00:20:56 crc kubenswrapper[5107]: I0220 00:20:56.511083 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc31182a-5394-4703-b119-ee5a9742f990" containerID="1d52580e9b44627fa3ff69c6115c0b5ff8e1292b834cad1b3dc7a948a58b8b52" exitCode=0 Feb 20 00:20:56 crc kubenswrapper[5107]: I0220 00:20:56.511203 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerDied","Data":"1d52580e9b44627fa3ff69c6115c0b5ff8e1292b834cad1b3dc7a948a58b8b52"} Feb 20 00:20:57 crc kubenswrapper[5107]: I0220 00:20:57.522379 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc31182a-5394-4703-b119-ee5a9742f990" containerID="32877490d63d71d34bb51572485dbc98aaaddcc5d3c5c3cc929c48a8ccb96a7d" exitCode=0 Feb 20 00:20:57 crc kubenswrapper[5107]: I0220 00:20:57.522421 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerDied","Data":"32877490d63d71d34bb51572485dbc98aaaddcc5d3c5c3cc929c48a8ccb96a7d"} Feb 20 00:20:57 crc kubenswrapper[5107]: I0220 00:20:57.596379 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_dc31182a-5394-4703-b119-ee5a9742f990/manage-dockerfile/0.log" Feb 20 00:20:58 crc kubenswrapper[5107]: I0220 00:20:58.532822 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerStarted","Data":"5a4955e5c9c7f78db9f352dbb62f1649b098e5ea6d44ea15df8aa18f9d11b47f"} Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.148522 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=74.148499152 podStartE2EDuration="1m14.148499152s" podCreationTimestamp="2026-02-20 00:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:20:58.575938901 +0000 UTC m=+744.944596477" watchObservedRunningTime="2026-02-20 00:22:00.148499152 +0000 UTC m=+806.517156738" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.151687 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525782-9877z"] Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.152598 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="docker-build" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.152626 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="docker-build" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 
00:22:00.152643 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="manage-dockerfile" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.152652 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="manage-dockerfile" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.152792 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="4fd7093c-854b-4f5a-affc-21412edec91d" containerName="docker-build" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.193381 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-9877z"] Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.193533 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.195741 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.195996 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.197449 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.349990 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlh2b\" (UniqueName: \"kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b\") pod \"auto-csr-approver-29525782-9877z\" (UID: \"15825612-2f1d-4906-839c-0828fdacf8ea\") " pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 
00:22:00.451081 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlh2b\" (UniqueName: \"kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b\") pod \"auto-csr-approver-29525782-9877z\" (UID: \"15825612-2f1d-4906-839c-0828fdacf8ea\") " pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.473746 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlh2b\" (UniqueName: \"kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b\") pod \"auto-csr-approver-29525782-9877z\" (UID: \"15825612-2f1d-4906-839c-0828fdacf8ea\") " pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:00 crc kubenswrapper[5107]: I0220 00:22:00.545115 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:01 crc kubenswrapper[5107]: I0220 00:22:01.040819 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-9877z"] Feb 20 00:22:02 crc kubenswrapper[5107]: I0220 00:22:02.025132 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-9877z" event={"ID":"15825612-2f1d-4906-839c-0828fdacf8ea","Type":"ContainerStarted","Data":"40a03541d3976497044d60e0bff14ddf761e23535fad8f6a0cfa719380a3eb63"} Feb 20 00:22:03 crc kubenswrapper[5107]: I0220 00:22:03.036738 5107 generic.go:358] "Generic (PLEG): container finished" podID="15825612-2f1d-4906-839c-0828fdacf8ea" containerID="a2bbd7bc3a041d5be8fb720f1c213cb3e498aa9bb14058c57edcaf0aad17e082" exitCode=0 Feb 20 00:22:03 crc kubenswrapper[5107]: I0220 00:22:03.036838 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-9877z" 
event={"ID":"15825612-2f1d-4906-839c-0828fdacf8ea","Type":"ContainerDied","Data":"a2bbd7bc3a041d5be8fb720f1c213cb3e498aa9bb14058c57edcaf0aad17e082"} Feb 20 00:22:04 crc kubenswrapper[5107]: I0220 00:22:04.329472 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:04 crc kubenswrapper[5107]: I0220 00:22:04.411953 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlh2b\" (UniqueName: \"kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b\") pod \"15825612-2f1d-4906-839c-0828fdacf8ea\" (UID: \"15825612-2f1d-4906-839c-0828fdacf8ea\") " Feb 20 00:22:04 crc kubenswrapper[5107]: I0220 00:22:04.426346 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b" (OuterVolumeSpecName: "kube-api-access-rlh2b") pod "15825612-2f1d-4906-839c-0828fdacf8ea" (UID: "15825612-2f1d-4906-839c-0828fdacf8ea"). InnerVolumeSpecName "kube-api-access-rlh2b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:04 crc kubenswrapper[5107]: I0220 00:22:04.513634 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlh2b\" (UniqueName: \"kubernetes.io/projected/15825612-2f1d-4906-839c-0828fdacf8ea-kube-api-access-rlh2b\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:05 crc kubenswrapper[5107]: I0220 00:22:05.052201 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525782-9877z" event={"ID":"15825612-2f1d-4906-839c-0828fdacf8ea","Type":"ContainerDied","Data":"40a03541d3976497044d60e0bff14ddf761e23535fad8f6a0cfa719380a3eb63"} Feb 20 00:22:05 crc kubenswrapper[5107]: I0220 00:22:05.052479 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a03541d3976497044d60e0bff14ddf761e23535fad8f6a0cfa719380a3eb63" Feb 20 00:22:05 crc kubenswrapper[5107]: I0220 00:22:05.052250 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525782-9877z" Feb 20 00:22:05 crc kubenswrapper[5107]: I0220 00:22:05.411052 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525776-ccd67"] Feb 20 00:22:05 crc kubenswrapper[5107]: I0220 00:22:05.418640 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525776-ccd67"] Feb 20 00:22:06 crc kubenswrapper[5107]: I0220 00:22:06.497755 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6635f88f-79b8-4f8a-a478-c70d4c3b88ff" path="/var/lib/kubelet/pods/6635f88f-79b8-4f8a-a478-c70d4c3b88ff/volumes" Feb 20 00:22:33 crc kubenswrapper[5107]: I0220 00:22:33.258642 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc31182a-5394-4703-b119-ee5a9742f990" containerID="5a4955e5c9c7f78db9f352dbb62f1649b098e5ea6d44ea15df8aa18f9d11b47f" exitCode=0 Feb 20 00:22:33 crc kubenswrapper[5107]: I0220 00:22:33.258748 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerDied","Data":"5a4955e5c9c7f78db9f352dbb62f1649b098e5ea6d44ea15df8aa18f9d11b47f"} Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.604041 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663696 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663779 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663849 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663875 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663907 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l2g4s\" (UniqueName: \"kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.663945 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664023 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664064 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664072 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664121 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664130 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664177 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664290 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664326 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push\") pod \"dc31182a-5394-4703-b119-ee5a9742f990\" (UID: \"dc31182a-5394-4703-b119-ee5a9742f990\") " Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664876 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.664900 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc31182a-5394-4703-b119-ee5a9742f990-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.666203 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod 
"dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.666680 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.667187 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.667218 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.670539 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s" (OuterVolumeSpecName: "kube-api-access-l2g4s") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "kube-api-access-l2g4s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.671207 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.676548 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.719063 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765637 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765666 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2g4s\" (UniqueName: \"kubernetes.io/projected/dc31182a-5394-4703-b119-ee5a9742f990-kube-api-access-l2g4s\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765676 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765687 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765696 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765704 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765712 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc31182a-5394-4703-b119-ee5a9742f990-builder-dockercfg-56v9d-push\") on 
node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.765719 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc31182a-5394-4703-b119-ee5a9742f990-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.866750 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.968631 5107 scope.go:117] "RemoveContainer" containerID="224390b7f1c9d11a0037e1fe378c663a18c1b5ce00d89266197d92fa2ecd5883" Feb 20 00:22:34 crc kubenswrapper[5107]: I0220 00:22:34.968765 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:35 crc kubenswrapper[5107]: I0220 00:22:35.277803 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"dc31182a-5394-4703-b119-ee5a9742f990","Type":"ContainerDied","Data":"c27a10fb7d373d4ef28cff0ec2af1e5db51b19709854b637efd2336cb40aabcf"} Feb 20 00:22:35 crc kubenswrapper[5107]: I0220 00:22:35.277880 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27a10fb7d373d4ef28cff0ec2af1e5db51b19709854b637efd2336cb40aabcf" Feb 20 00:22:35 crc kubenswrapper[5107]: I0220 00:22:35.277821 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 20 00:22:36 crc kubenswrapper[5107]: I0220 00:22:36.670926 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dc31182a-5394-4703-b119-ee5a9742f990" (UID: "dc31182a-5394-4703-b119-ee5a9742f990"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:22:36 crc kubenswrapper[5107]: I0220 00:22:36.690420 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc31182a-5394-4703-b119-ee5a9742f990-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.421857 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422881 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15825612-2f1d-4906-839c-0828fdacf8ea" containerName="oc" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422897 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="15825612-2f1d-4906-839c-0828fdacf8ea" containerName="oc" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422908 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="manage-dockerfile" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422913 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="manage-dockerfile" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422942 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc31182a-5394-4703-b119-ee5a9742f990" 
containerName="git-clone" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422949 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="git-clone" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422960 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="docker-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.422966 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="docker-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.423066 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="15825612-2f1d-4906-839c-0828fdacf8ea" containerName="oc" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.423076 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc31182a-5394-4703-b119-ee5a9742f990" containerName="docker-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.565969 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.566559 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.569427 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.569503 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.569948 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.570646 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.613832 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knw6c\" (UniqueName: \"kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.613914 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.613963 
5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614002 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614040 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614076 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614222 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: 
\"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614258 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614304 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614359 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614438 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.614475 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715425 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715465 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715488 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715509 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715748 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715825 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715839 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715974 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.715997 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-knw6c\" (UniqueName: \"kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716053 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716107 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716187 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716279 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716317 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 
00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716643 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.716811 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.717017 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.718656 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.719099 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.719334 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.720324 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.724505 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.728386 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-1-build\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.745258 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-knw6c\" (UniqueName: \"kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c\") pod \"smart-gateway-operator-1-build\" (UID: 
\"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:38 crc kubenswrapper[5107]: I0220 00:22:38.896840 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:39 crc kubenswrapper[5107]: I0220 00:22:39.322196 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 20 00:22:40 crc kubenswrapper[5107]: I0220 00:22:40.326544 5107 generic.go:358] "Generic (PLEG): container finished" podID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerID="c5142d3031a98cdd2ab8222714bdbf28510025bfdb62d8d508be69f479ab7049" exitCode=0 Feb 20 00:22:40 crc kubenswrapper[5107]: I0220 00:22:40.326617 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"c2444b57-cdc8-4c20-89c5-f67a8d878548","Type":"ContainerDied","Data":"c5142d3031a98cdd2ab8222714bdbf28510025bfdb62d8d508be69f479ab7049"} Feb 20 00:22:40 crc kubenswrapper[5107]: I0220 00:22:40.326705 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"c2444b57-cdc8-4c20-89c5-f67a8d878548","Type":"ContainerStarted","Data":"dc72df3e4535fbd7c9579e14238ec8e8e366b6550aaf6ce1f85cb57d4e473f79"} Feb 20 00:22:41 crc kubenswrapper[5107]: I0220 00:22:41.339910 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"c2444b57-cdc8-4c20-89c5-f67a8d878548","Type":"ContainerStarted","Data":"aeb31905c8ecf8fd6d46726168c78413b0314ec652727aa6227081a2384385b5"} Feb 20 00:22:41 crc kubenswrapper[5107]: I0220 00:22:41.382565 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.382544032 podStartE2EDuration="3.382544032s" podCreationTimestamp="2026-02-20 00:22:38 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:22:41.378554659 +0000 UTC m=+847.747212235" watchObservedRunningTime="2026-02-20 00:22:41.382544032 +0000 UTC m=+847.751201628" Feb 20 00:22:49 crc kubenswrapper[5107]: I0220 00:22:49.032079 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 20 00:22:49 crc kubenswrapper[5107]: I0220 00:22:49.033432 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="docker-build" containerID="cri-o://aeb31905c8ecf8fd6d46726168c78413b0314ec652727aa6227081a2384385b5" gracePeriod=30 Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.414397 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_c2444b57-cdc8-4c20-89c5-f67a8d878548/docker-build/0.log" Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.415705 5107 generic.go:358] "Generic (PLEG): container finished" podID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerID="aeb31905c8ecf8fd6d46726168c78413b0314ec652727aa6227081a2384385b5" exitCode=1 Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.416014 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"c2444b57-cdc8-4c20-89c5-f67a8d878548","Type":"ContainerDied","Data":"aeb31905c8ecf8fd6d46726168c78413b0314ec652727aa6227081a2384385b5"} Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.574944 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_c2444b57-cdc8-4c20-89c5-f67a8d878548/docker-build/0.log" Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.575276 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714713 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714771 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714813 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714843 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714890 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") " Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714922 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.714953 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.715036 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.715107 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knw6c\" (UniqueName: \"kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.715173 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.715216 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.715293 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache\") pod \"c2444b57-cdc8-4c20-89c5-f67a8d878548\" (UID: \"c2444b57-cdc8-4c20-89c5-f67a8d878548\") "
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.716672 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.716738 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.720585 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.721421 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.722486 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.722485 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.723021 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.723203 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.723344 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c" (OuterVolumeSpecName: "kube-api-access-knw6c") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "kube-api-access-knw6c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.726421 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.727186 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.782833 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.785370 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="docker-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.785587 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="docker-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.785729 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="manage-dockerfile"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.785835 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="manage-dockerfile"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.787392 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" containerName="docker-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.793612 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.793746 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.796443 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.796517 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.796804 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.817805 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knw6c\" (UniqueName: \"kubernetes.io/projected/c2444b57-cdc8-4c20-89c5-f67a8d878548-kube-api-access-knw6c\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818235 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818253 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818268 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818284 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/c2444b57-cdc8-4c20-89c5-f67a8d878548-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818300 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818314 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818330 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818344 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2444b57-cdc8-4c20-89c5-f67a8d878548-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818358 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.818373 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.903947 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c2444b57-cdc8-4c20-89c5-f67a8d878548" (UID: "c2444b57-cdc8-4c20-89c5-f67a8d878548"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.919694 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.919736 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.919766 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.919904 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920052 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzzg\" (UniqueName: \"kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920094 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920232 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920330 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920404 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920466 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920599 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:50 crc kubenswrapper[5107]: I0220 00:22:50.920763 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2444b57-cdc8-4c20-89c5-f67a8d878548-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021699 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021748 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021766 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021824 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021858 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021884 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021913 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzzg\" (UniqueName: \"kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021933 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021958 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021977 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.021997 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.022014 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.022036 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.022410 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.022499 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.022793 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.023202 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.023289 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.023461 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.023826 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.023990 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.026647 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.028787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.050006 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzzg\" (UniqueName: \"kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg\") pod \"smart-gateway-operator-2-build\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") " pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.118583 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.401692 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Feb 20 00:22:51 crc kubenswrapper[5107]: W0220 00:22:51.410866 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod102ebadc_175f_4435_957c_5b47a0368962.slice/crio-8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509 WatchSource:0}: Error finding container 8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509: Status 404 returned error can't find the container with id 8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.423329 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerStarted","Data":"8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509"}
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.424985 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_c2444b57-cdc8-4c20-89c5-f67a8d878548/docker-build/0.log"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.425488 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.425522 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"c2444b57-cdc8-4c20-89c5-f67a8d878548","Type":"ContainerDied","Data":"dc72df3e4535fbd7c9579e14238ec8e8e366b6550aaf6ce1f85cb57d4e473f79"}
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.425587 5107 scope.go:117] "RemoveContainer" containerID="aeb31905c8ecf8fd6d46726168c78413b0314ec652727aa6227081a2384385b5"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.464983 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.466412 5107 scope.go:117] "RemoveContainer" containerID="c5142d3031a98cdd2ab8222714bdbf28510025bfdb62d8d508be69f479ab7049"
Feb 20 00:22:51 crc kubenswrapper[5107]: I0220 00:22:51.474855 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Feb 20 00:22:52 crc kubenswrapper[5107]: I0220 00:22:52.433638 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerStarted","Data":"69e01a56d2a51a09af9a59af448704f7f7f8edf851ef81f75e4fd5472bd4148d"}
Feb 20 00:22:52 crc kubenswrapper[5107]: I0220 00:22:52.495278 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2444b57-cdc8-4c20-89c5-f67a8d878548" path="/var/lib/kubelet/pods/c2444b57-cdc8-4c20-89c5-f67a8d878548/volumes"
Feb 20 00:22:53 crc kubenswrapper[5107]: I0220 00:22:53.441918 5107 generic.go:358] "Generic (PLEG): container finished" podID="102ebadc-175f-4435-957c-5b47a0368962" containerID="69e01a56d2a51a09af9a59af448704f7f7f8edf851ef81f75e4fd5472bd4148d" exitCode=0
Feb 20 00:22:53 crc kubenswrapper[5107]: I0220 00:22:53.442065 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerDied","Data":"69e01a56d2a51a09af9a59af448704f7f7f8edf851ef81f75e4fd5472bd4148d"}
Feb 20 00:22:54 crc kubenswrapper[5107]: I0220 00:22:54.453379 5107 generic.go:358] "Generic (PLEG): container finished" podID="102ebadc-175f-4435-957c-5b47a0368962" containerID="8521b3678aca5838b7f22f66cab188a994950da27cba05a9b40c0cb306909972" exitCode=0
Feb 20 00:22:54 crc kubenswrapper[5107]: I0220 00:22:54.453608 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerDied","Data":"8521b3678aca5838b7f22f66cab188a994950da27cba05a9b40c0cb306909972"}
Feb 20 00:22:54 crc kubenswrapper[5107]: I0220 00:22:54.515553 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_102ebadc-175f-4435-957c-5b47a0368962/manage-dockerfile/0.log"
Feb 20 00:22:55 crc kubenswrapper[5107]: I0220 00:22:55.467176 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerStarted","Data":"67f22d6d927f4b7abd8d526b4d423909a7fbc06c78fdc64fafb744a5ee6d1460"}
Feb 20 00:22:55 crc kubenswrapper[5107]: I0220 00:22:55.520994 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.520968444 podStartE2EDuration="5.520968444s" podCreationTimestamp="2026-02-20 00:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:22:55.516448927 +0000 UTC m=+861.885106503" watchObservedRunningTime="2026-02-20 00:22:55.520968444 +0000 UTC m=+861.889626020"
Feb 20 00:23:02 crc kubenswrapper[5107]: I0220 00:23:02.824975 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:23:02 crc kubenswrapper[5107]: I0220 00:23:02.825573 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.611949 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"]
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.692867 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"]
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.693052 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.808678 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.808752 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mm6j\" (UniqueName: \"kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.808843 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.911363 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.911477 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.911669 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.911751 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mm6j\" (UniqueName: \"kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.912423 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:09 crc kubenswrapper[5107]: I0220 00:23:09.950783 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mm6j\" (UniqueName: \"kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j\") pod \"certified-operators-rsh8w\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:10 crc kubenswrapper[5107]: I0220 00:23:10.013129 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsh8w"
Feb 20 00:23:10 crc kubenswrapper[5107]: I0220 00:23:10.262212 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"]
Feb 20 00:23:10 crc kubenswrapper[5107]: I0220 00:23:10.585731 5107 generic.go:358] "Generic (PLEG): container finished" podID="73a16802-fe2e-4de3-890d-afda78033632" containerID="39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d" exitCode=0
Feb 20 00:23:10 crc kubenswrapper[5107]: I0220 00:23:10.585791 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w" event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerDied","Data":"39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d"}
Feb 20 00:23:10 crc kubenswrapper[5107]: I0220 00:23:10.586111 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w" event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerStarted","Data":"ad672f4fc800f138288615bb3104a319d7cf185ff16bc04f8cc8ebac69483022"}
Feb 20 00:23:11 crc kubenswrapper[5107]: I0220 00:23:11.594278 5107 generic.go:358] "Generic (PLEG): container finished" podID="73a16802-fe2e-4de3-890d-afda78033632" containerID="e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93" exitCode=0
Feb 20 00:23:11 crc kubenswrapper[5107]: I0220 00:23:11.594398 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w" event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerDied","Data":"e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93"}
Feb 20 00:23:12 crc kubenswrapper[5107]: I0220 00:23:12.603587 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w"
event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerStarted","Data":"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a"} Feb 20 00:23:12 crc kubenswrapper[5107]: I0220 00:23:12.623542 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rsh8w" podStartSLOduration=2.992535635 podStartE2EDuration="3.623526938s" podCreationTimestamp="2026-02-20 00:23:09 +0000 UTC" firstStartedPulling="2026-02-20 00:23:10.586775947 +0000 UTC m=+876.955433513" lastFinishedPulling="2026-02-20 00:23:11.21776724 +0000 UTC m=+877.586424816" observedRunningTime="2026-02-20 00:23:12.619723171 +0000 UTC m=+878.988380747" watchObservedRunningTime="2026-02-20 00:23:12.623526938 +0000 UTC m=+878.992184504" Feb 20 00:23:20 crc kubenswrapper[5107]: I0220 00:23:20.015735 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:20 crc kubenswrapper[5107]: I0220 00:23:20.016489 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:20 crc kubenswrapper[5107]: I0220 00:23:20.094613 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:20 crc kubenswrapper[5107]: I0220 00:23:20.742811 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:20 crc kubenswrapper[5107]: I0220 00:23:20.794857 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"] Feb 20 00:23:22 crc kubenswrapper[5107]: I0220 00:23:22.703236 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rsh8w" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="registry-server" 
containerID="cri-o://5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a" gracePeriod=2 Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.100578 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.220756 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mm6j\" (UniqueName: \"kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j\") pod \"73a16802-fe2e-4de3-890d-afda78033632\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.220865 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content\") pod \"73a16802-fe2e-4de3-890d-afda78033632\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.220971 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities\") pod \"73a16802-fe2e-4de3-890d-afda78033632\" (UID: \"73a16802-fe2e-4de3-890d-afda78033632\") " Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.222281 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities" (OuterVolumeSpecName: "utilities") pod "73a16802-fe2e-4de3-890d-afda78033632" (UID: "73a16802-fe2e-4de3-890d-afda78033632"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.232294 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j" (OuterVolumeSpecName: "kube-api-access-7mm6j") pod "73a16802-fe2e-4de3-890d-afda78033632" (UID: "73a16802-fe2e-4de3-890d-afda78033632"). InnerVolumeSpecName "kube-api-access-7mm6j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.274752 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73a16802-fe2e-4de3-890d-afda78033632" (UID: "73a16802-fe2e-4de3-890d-afda78033632"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.322463 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7mm6j\" (UniqueName: \"kubernetes.io/projected/73a16802-fe2e-4de3-890d-afda78033632-kube-api-access-7mm6j\") on node \"crc\" DevicePath \"\"" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.322518 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.322535 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73a16802-fe2e-4de3-890d-afda78033632-utilities\") on node \"crc\" DevicePath \"\"" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.711426 5107 generic.go:358] "Generic (PLEG): container finished" podID="73a16802-fe2e-4de3-890d-afda78033632" 
containerID="5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a" exitCode=0 Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.711569 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w" event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerDied","Data":"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a"} Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.711669 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsh8w" event={"ID":"73a16802-fe2e-4de3-890d-afda78033632","Type":"ContainerDied","Data":"ad672f4fc800f138288615bb3104a319d7cf185ff16bc04f8cc8ebac69483022"} Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.711704 5107 scope.go:117] "RemoveContainer" containerID="5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.713416 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsh8w" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.733244 5107 scope.go:117] "RemoveContainer" containerID="e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.759790 5107 scope.go:117] "RemoveContainer" containerID="39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.803036 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"] Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.809513 5107 scope.go:117] "RemoveContainer" containerID="5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a" Feb 20 00:23:23 crc kubenswrapper[5107]: E0220 00:23:23.810298 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a\": container with ID starting with 5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a not found: ID does not exist" containerID="5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.810376 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a"} err="failed to get container status \"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a\": rpc error: code = NotFound desc = could not find container \"5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a\": container with ID starting with 5256d4b64abe821125c1ffa7afd6e0aff0b31ad1e620ee04bc36e1dc6207d30a not found: ID does not exist" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.810459 5107 scope.go:117] "RemoveContainer" 
containerID="e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93" Feb 20 00:23:23 crc kubenswrapper[5107]: E0220 00:23:23.810982 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93\": container with ID starting with e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93 not found: ID does not exist" containerID="e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.811222 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93"} err="failed to get container status \"e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93\": rpc error: code = NotFound desc = could not find container \"e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93\": container with ID starting with e2e0fa6d18c6d3780e1a5f2c290388f7bb4a369ca3ceaaf7ed2a2fca33229c93 not found: ID does not exist" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.811386 5107 scope.go:117] "RemoveContainer" containerID="39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d" Feb 20 00:23:23 crc kubenswrapper[5107]: E0220 00:23:23.811915 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d\": container with ID starting with 39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d not found: ID does not exist" containerID="39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.811962 5107 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d"} err="failed to get container status \"39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d\": rpc error: code = NotFound desc = could not find container \"39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d\": container with ID starting with 39e38fee5dfeeb6857abc30658406f434fd4ceb144c1dfb3a372919f3366082d not found: ID does not exist" Feb 20 00:23:23 crc kubenswrapper[5107]: I0220 00:23:23.816514 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rsh8w"] Feb 20 00:23:24 crc kubenswrapper[5107]: I0220 00:23:24.496739 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a16802-fe2e-4de3-890d-afda78033632" path="/var/lib/kubelet/pods/73a16802-fe2e-4de3-890d-afda78033632/volumes" Feb 20 00:23:32 crc kubenswrapper[5107]: I0220 00:23:32.825019 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:23:32 crc kubenswrapper[5107]: I0220 00:23:32.825666 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.110090 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"] Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111735 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73a16802-fe2e-4de3-890d-afda78033632" 
containerName="registry-server" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111769 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="registry-server" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111821 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="extract-content" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111833 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="extract-content" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111864 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="extract-utilities" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.111874 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="extract-utilities" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.112034 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="73a16802-fe2e-4de3-890d-afda78033632" containerName="registry-server" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.120712 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.127942 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"] Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.291067 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgzk4\" (UniqueName: \"kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.291155 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.291207 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.392783 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pgzk4\" (UniqueName: \"kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.393051 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.393126 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.393714 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.393780 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.418085 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgzk4\" (UniqueName: \"kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4\") pod \"redhat-operators-xpmwv\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") " pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.467956 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xpmwv" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.669536 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"] Feb 20 00:23:34 crc kubenswrapper[5107]: W0220 00:23:34.686426 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96892d63_ca44_45ce_a78c_abec0fb2d3bd.slice/crio-8318f9fb382a3685f940d3a8a47f1bf415993fa74040d2b497ff9a757968c0f2 WatchSource:0}: Error finding container 8318f9fb382a3685f940d3a8a47f1bf415993fa74040d2b497ff9a757968c0f2: Status 404 returned error can't find the container with id 8318f9fb382a3685f940d3a8a47f1bf415993fa74040d2b497ff9a757968c0f2 Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.808801 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerStarted","Data":"8318f9fb382a3685f940d3a8a47f1bf415993fa74040d2b497ff9a757968c0f2"} Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.894852 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.905883 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.908965 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:23:34 crc kubenswrapper[5107]: I0220 00:23:34.914811 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:23:35 crc kubenswrapper[5107]: I0220 00:23:35.815880 5107 generic.go:358] "Generic (PLEG): container finished" podID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerID="b81f02da376643db9e2f308dae9f605d6949e38551b52a8b40b23118ff5efd4f" exitCode=0 Feb 20 00:23:35 crc kubenswrapper[5107]: I0220 00:23:35.815934 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerDied","Data":"b81f02da376643db9e2f308dae9f605d6949e38551b52a8b40b23118ff5efd4f"} Feb 20 00:23:37 crc kubenswrapper[5107]: I0220 00:23:37.834473 5107 generic.go:358] "Generic (PLEG): container finished" podID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerID="58fa9875ddb4bdfc7659eb7a60722792ffa7e7d4a4c88b9950d27af0a8cbf4f6" exitCode=0 Feb 20 00:23:37 crc kubenswrapper[5107]: I0220 00:23:37.834542 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerDied","Data":"58fa9875ddb4bdfc7659eb7a60722792ffa7e7d4a4c88b9950d27af0a8cbf4f6"} Feb 20 00:23:38 crc kubenswrapper[5107]: I0220 00:23:38.845132 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerStarted","Data":"0a4113214a68c970b91e8f5dde298d7978ef9bf2cdf2118b5623b9714b6a25b6"} Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.294796 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xpmwv" podStartSLOduration=6.34975368 podStartE2EDuration="7.294772859s" podCreationTimestamp="2026-02-20 00:23:34 +0000 UTC" firstStartedPulling="2026-02-20 00:23:35.816920015 +0000 UTC m=+902.185577591" 
lastFinishedPulling="2026-02-20 00:23:36.761939194 +0000 UTC m=+903.130596770" observedRunningTime="2026-02-20 00:23:38.868493087 +0000 UTC m=+905.237150673" watchObservedRunningTime="2026-02-20 00:23:41.294772859 +0000 UTC m=+907.663430465" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.304788 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9rjhv"] Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.397401 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rjhv"] Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.397577 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.450931 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9sk\" (UniqueName: \"kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.451002 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.451224 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " 
pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.553162 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.553295 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9sk\" (UniqueName: \"kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.553329 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.553807 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv" Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.553840 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " 
pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.581827 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9sk\" (UniqueName: \"kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk\") pod \"community-operators-9rjhv\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") " pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:23:41 crc kubenswrapper[5107]: I0220 00:23:41.718223 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:23:42 crc kubenswrapper[5107]: I0220 00:23:42.198699 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rjhv"]
Feb 20 00:23:42 crc kubenswrapper[5107]: W0220 00:23:42.209904 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5bdfba_f643_4d0c_9def_c849da707717.slice/crio-1878a2d8d4a1a02a2f3f6037603c6da08539896503f82fe55784ab3b537a457f WatchSource:0}: Error finding container 1878a2d8d4a1a02a2f3f6037603c6da08539896503f82fe55784ab3b537a457f: Status 404 returned error can't find the container with id 1878a2d8d4a1a02a2f3f6037603c6da08539896503f82fe55784ab3b537a457f
Feb 20 00:23:42 crc kubenswrapper[5107]: I0220 00:23:42.873484 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerStarted","Data":"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"}
Feb 20 00:23:42 crc kubenswrapper[5107]: I0220 00:23:42.873541 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerStarted","Data":"1878a2d8d4a1a02a2f3f6037603c6da08539896503f82fe55784ab3b537a457f"}
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.468750 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.468834 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.527348 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.894877 5107 generic.go:358] "Generic (PLEG): container finished" podID="be5bdfba-f643-4d0c-9def-c849da707717" containerID="57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20" exitCode=0
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.895011 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerDied","Data":"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"}
Feb 20 00:23:44 crc kubenswrapper[5107]: I0220 00:23:44.967897 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:46 crc kubenswrapper[5107]: I0220 00:23:46.280197 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"]
Feb 20 00:23:46 crc kubenswrapper[5107]: I0220 00:23:46.915501 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xpmwv" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="registry-server" containerID="cri-o://0a4113214a68c970b91e8f5dde298d7978ef9bf2cdf2118b5623b9714b6a25b6" gracePeriod=2
Feb 20 00:23:47 crc kubenswrapper[5107]: I0220 00:23:47.933612 5107 generic.go:358] "Generic (PLEG): container finished" podID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerID="0a4113214a68c970b91e8f5dde298d7978ef9bf2cdf2118b5623b9714b6a25b6" exitCode=0
Feb 20 00:23:47 crc kubenswrapper[5107]: I0220 00:23:47.933715 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerDied","Data":"0a4113214a68c970b91e8f5dde298d7978ef9bf2cdf2118b5623b9714b6a25b6"}
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.048964 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.092158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgzk4\" (UniqueName: \"kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4\") pod \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") "
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.092242 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content\") pod \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") "
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.092326 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities\") pod \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\" (UID: \"96892d63-ca44-45ce-a78c-abec0fb2d3bd\") "
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.093412 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities" (OuterVolumeSpecName: "utilities") pod "96892d63-ca44-45ce-a78c-abec0fb2d3bd" (UID: "96892d63-ca44-45ce-a78c-abec0fb2d3bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.109343 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4" (OuterVolumeSpecName: "kube-api-access-pgzk4") pod "96892d63-ca44-45ce-a78c-abec0fb2d3bd" (UID: "96892d63-ca44-45ce-a78c-abec0fb2d3bd"). InnerVolumeSpecName "kube-api-access-pgzk4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.191150 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96892d63-ca44-45ce-a78c-abec0fb2d3bd" (UID: "96892d63-ca44-45ce-a78c-abec0fb2d3bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.193231 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-utilities\") on node \"crc\" DevicePath \"\""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.193253 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgzk4\" (UniqueName: \"kubernetes.io/projected/96892d63-ca44-45ce-a78c-abec0fb2d3bd-kube-api-access-pgzk4\") on node \"crc\" DevicePath \"\""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.193264 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96892d63-ca44-45ce-a78c-abec0fb2d3bd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.949047 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xpmwv" event={"ID":"96892d63-ca44-45ce-a78c-abec0fb2d3bd","Type":"ContainerDied","Data":"8318f9fb382a3685f940d3a8a47f1bf415993fa74040d2b497ff9a757968c0f2"}
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.949497 5107 scope.go:117] "RemoveContainer" containerID="0a4113214a68c970b91e8f5dde298d7978ef9bf2cdf2118b5623b9714b6a25b6"
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.949128 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xpmwv"
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.976981 5107 scope.go:117] "RemoveContainer" containerID="58fa9875ddb4bdfc7659eb7a60722792ffa7e7d4a4c88b9950d27af0a8cbf4f6"
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.979480 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"]
Feb 20 00:23:48 crc kubenswrapper[5107]: I0220 00:23:48.986545 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xpmwv"]
Feb 20 00:23:49 crc kubenswrapper[5107]: I0220 00:23:49.006303 5107 scope.go:117] "RemoveContainer" containerID="b81f02da376643db9e2f308dae9f605d6949e38551b52a8b40b23118ff5efd4f"
Feb 20 00:23:49 crc kubenswrapper[5107]: I0220 00:23:49.961007 5107 generic.go:358] "Generic (PLEG): container finished" podID="be5bdfba-f643-4d0c-9def-c849da707717" containerID="9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9" exitCode=0
Feb 20 00:23:49 crc kubenswrapper[5107]: I0220 00:23:49.961166 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerDied","Data":"9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9"}
Feb 20 00:23:50 crc kubenswrapper[5107]: I0220 00:23:50.493859 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" path="/var/lib/kubelet/pods/96892d63-ca44-45ce-a78c-abec0fb2d3bd/volumes"
Feb 20 00:23:50 crc kubenswrapper[5107]: I0220 00:23:50.971717 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerStarted","Data":"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"}
Feb 20 00:23:50 crc kubenswrapper[5107]: I0220 00:23:50.989379 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9rjhv" podStartSLOduration=6.025463286 podStartE2EDuration="9.989363497s" podCreationTimestamp="2026-02-20 00:23:41 +0000 UTC" firstStartedPulling="2026-02-20 00:23:44.897569806 +0000 UTC m=+911.266227402" lastFinishedPulling="2026-02-20 00:23:48.861470037 +0000 UTC m=+915.230127613" observedRunningTime="2026-02-20 00:23:50.98733394 +0000 UTC m=+917.355991506" watchObservedRunningTime="2026-02-20 00:23:50.989363497 +0000 UTC m=+917.358021063"
Feb 20 00:23:51 crc kubenswrapper[5107]: I0220 00:23:51.718792 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:23:51 crc kubenswrapper[5107]: I0220 00:23:51.719193 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:23:52 crc kubenswrapper[5107]: I0220 00:23:52.756908 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9rjhv" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="registry-server" probeResult="failure" output=<
Feb 20 00:23:52 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s
Feb 20 00:23:52 crc kubenswrapper[5107]: >
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.149008 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525784-prptk"]
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150182 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="registry-server"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150195 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="registry-server"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150226 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="extract-utilities"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150232 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="extract-utilities"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150238 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="extract-content"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150242 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="extract-content"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.150339 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="96892d63-ca44-45ce-a78c-abec0fb2d3bd" containerName="registry-server"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.159902 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.161997 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-prptk"]
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.162391 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.163384 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.163908 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\""
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.266261 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlsb\" (UniqueName: \"kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb\") pod \"auto-csr-approver-29525784-prptk\" (UID: \"94ab841c-7137-4098-ab2a-00c93ae365cd\") " pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.367723 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5tlsb\" (UniqueName: \"kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb\") pod \"auto-csr-approver-29525784-prptk\" (UID: \"94ab841c-7137-4098-ab2a-00c93ae365cd\") " pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.394389 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tlsb\" (UniqueName: \"kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb\") pod \"auto-csr-approver-29525784-prptk\" (UID: \"94ab841c-7137-4098-ab2a-00c93ae365cd\") " pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.482268 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:00 crc kubenswrapper[5107]: I0220 00:24:00.704947 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-prptk"]
Feb 20 00:24:01 crc kubenswrapper[5107]: I0220 00:24:01.045276 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-prptk" event={"ID":"94ab841c-7137-4098-ab2a-00c93ae365cd","Type":"ContainerStarted","Data":"33b4f7db9f0c29de6e783ab2af5d42da109586540043b20a8678bea1ad32d4e5"}
Feb 20 00:24:01 crc kubenswrapper[5107]: I0220 00:24:01.768656 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:24:01 crc kubenswrapper[5107]: I0220 00:24:01.834096 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.000894 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rjhv"]
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.057966 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-prptk" event={"ID":"94ab841c-7137-4098-ab2a-00c93ae365cd","Type":"ContainerStarted","Data":"e308a80624a3e0b0f1358657c05a9c082ed9eed8e8c19635a105ea6c96f9be73"}
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.078172 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525784-prptk" podStartSLOduration=1.245410085 podStartE2EDuration="2.078156036s" podCreationTimestamp="2026-02-20 00:24:00 +0000 UTC" firstStartedPulling="2026-02-20 00:24:00.706184447 +0000 UTC m=+927.074842013" lastFinishedPulling="2026-02-20 00:24:01.538930388 +0000 UTC m=+927.907587964" observedRunningTime="2026-02-20 00:24:02.077477897 +0000 UTC m=+928.446135453" watchObservedRunningTime="2026-02-20 00:24:02.078156036 +0000 UTC m=+928.446813602"
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.824948 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.825704 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.825842 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx"
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.826561 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 20 00:24:02 crc kubenswrapper[5107]: I0220 00:24:02.826757 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f" gracePeriod=600
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.068887 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f" exitCode=0
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.068935 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f"}
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.069255 5107 scope.go:117] "RemoveContainer" containerID="bb20a5be1ae88e4e4d0571e4849fdfa6beddb51816cd34f7807146b41b9e36ee"
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.071265 5107 generic.go:358] "Generic (PLEG): container finished" podID="94ab841c-7137-4098-ab2a-00c93ae365cd" containerID="e308a80624a3e0b0f1358657c05a9c082ed9eed8e8c19635a105ea6c96f9be73" exitCode=0
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.072474 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9rjhv" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="registry-server" containerID="cri-o://60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307" gracePeriod=2
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.071421 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-prptk" event={"ID":"94ab841c-7137-4098-ab2a-00c93ae365cd","Type":"ContainerDied","Data":"e308a80624a3e0b0f1358657c05a9c082ed9eed8e8c19635a105ea6c96f9be73"}
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.437164 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.511744 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content\") pod \"be5bdfba-f643-4d0c-9def-c849da707717\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") "
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.511858 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities\") pod \"be5bdfba-f643-4d0c-9def-c849da707717\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") "
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.511978 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg9sk\" (UniqueName: \"kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk\") pod \"be5bdfba-f643-4d0c-9def-c849da707717\" (UID: \"be5bdfba-f643-4d0c-9def-c849da707717\") "
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.512813 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities" (OuterVolumeSpecName: "utilities") pod "be5bdfba-f643-4d0c-9def-c849da707717" (UID: "be5bdfba-f643-4d0c-9def-c849da707717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.517479 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk" (OuterVolumeSpecName: "kube-api-access-zg9sk") pod "be5bdfba-f643-4d0c-9def-c849da707717" (UID: "be5bdfba-f643-4d0c-9def-c849da707717"). InnerVolumeSpecName "kube-api-access-zg9sk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.562296 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be5bdfba-f643-4d0c-9def-c849da707717" (UID: "be5bdfba-f643-4d0c-9def-c849da707717"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.613051 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.613076 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be5bdfba-f643-4d0c-9def-c849da707717-utilities\") on node \"crc\" DevicePath \"\""
Feb 20 00:24:03 crc kubenswrapper[5107]: I0220 00:24:03.613085 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg9sk\" (UniqueName: \"kubernetes.io/projected/be5bdfba-f643-4d0c-9def-c849da707717-kube-api-access-zg9sk\") on node \"crc\" DevicePath \"\""
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.081521 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179"}
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.084458 5107 generic.go:358] "Generic (PLEG): container finished" podID="be5bdfba-f643-4d0c-9def-c849da707717" containerID="60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307" exitCode=0
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.084621 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rjhv"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.084498 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerDied","Data":"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"}
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.084766 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rjhv" event={"ID":"be5bdfba-f643-4d0c-9def-c849da707717","Type":"ContainerDied","Data":"1878a2d8d4a1a02a2f3f6037603c6da08539896503f82fe55784ab3b537a457f"}
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.084802 5107 scope.go:117] "RemoveContainer" containerID="60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.122260 5107 scope.go:117] "RemoveContainer" containerID="9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.129863 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rjhv"]
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.135350 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9rjhv"]
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.162866 5107 scope.go:117] "RemoveContainer" containerID="57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.185962 5107 scope.go:117] "RemoveContainer" containerID="60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"
Feb 20 00:24:04 crc kubenswrapper[5107]: E0220 00:24:04.186588 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307\": container with ID starting with 60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307 not found: ID does not exist" containerID="60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.186640 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307"} err="failed to get container status \"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307\": rpc error: code = NotFound desc = could not find container \"60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307\": container with ID starting with 60638fb9baf9ce0e04a56e9146457a41eda996c190323016874952115d826307 not found: ID does not exist"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.186671 5107 scope.go:117] "RemoveContainer" containerID="9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9"
Feb 20 00:24:04 crc kubenswrapper[5107]: E0220 00:24:04.187239 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9\": container with ID starting with 9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9 not found: ID does not exist" containerID="9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.187286 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9"} err="failed to get container status \"9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9\": rpc error: code = NotFound desc = could not find container \"9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9\": container with ID starting with 9fbf3c21a7e6a9d6685351af22cbf4a7e69d5dab8c73c438ff7c35a91936dec9 not found: ID does not exist"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.187320 5107 scope.go:117] "RemoveContainer" containerID="57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"
Feb 20 00:24:04 crc kubenswrapper[5107]: E0220 00:24:04.187704 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20\": container with ID starting with 57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20 not found: ID does not exist" containerID="57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.187735 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20"} err="failed to get container status \"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20\": rpc error: code = NotFound desc = could not find container \"57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20\": container with ID starting with 57803d3b370dbd1c17c6047cc6cc95af7733f9883daa59435f840e50d5501e20 not found: ID does not exist"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.420100 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.498271 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5bdfba-f643-4d0c-9def-c849da707717" path="/var/lib/kubelet/pods/be5bdfba-f643-4d0c-9def-c849da707717/volumes"
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.532770 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tlsb\" (UniqueName: \"kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb\") pod \"94ab841c-7137-4098-ab2a-00c93ae365cd\" (UID: \"94ab841c-7137-4098-ab2a-00c93ae365cd\") "
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.538986 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb" (OuterVolumeSpecName: "kube-api-access-5tlsb") pod "94ab841c-7137-4098-ab2a-00c93ae365cd" (UID: "94ab841c-7137-4098-ab2a-00c93ae365cd"). InnerVolumeSpecName "kube-api-access-5tlsb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:24:04 crc kubenswrapper[5107]: I0220 00:24:04.635200 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tlsb\" (UniqueName: \"kubernetes.io/projected/94ab841c-7137-4098-ab2a-00c93ae365cd-kube-api-access-5tlsb\") on node \"crc\" DevicePath \"\""
Feb 20 00:24:05 crc kubenswrapper[5107]: I0220 00:24:05.099499 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525784-prptk"
Feb 20 00:24:05 crc kubenswrapper[5107]: I0220 00:24:05.099545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525784-prptk" event={"ID":"94ab841c-7137-4098-ab2a-00c93ae365cd","Type":"ContainerDied","Data":"33b4f7db9f0c29de6e783ab2af5d42da109586540043b20a8678bea1ad32d4e5"}
Feb 20 00:24:05 crc kubenswrapper[5107]: I0220 00:24:05.099608 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b4f7db9f0c29de6e783ab2af5d42da109586540043b20a8678bea1ad32d4e5"
Feb 20 00:24:05 crc kubenswrapper[5107]: I0220 00:24:05.156201 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-2l5s2"]
Feb 20 00:24:05 crc kubenswrapper[5107]: I0220 00:24:05.162535 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525778-2l5s2"]
Feb 20 00:24:06 crc kubenswrapper[5107]: I0220 00:24:06.495957 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d3402d-9de0-434f-9a28-c032250d9161" path="/var/lib/kubelet/pods/b4d3402d-9de0-434f-9a28-c032250d9161/volumes"
Feb 20 00:24:11 crc kubenswrapper[5107]: I0220 00:24:11.149444 5107 generic.go:358] "Generic (PLEG): container finished" podID="102ebadc-175f-4435-957c-5b47a0368962" containerID="67f22d6d927f4b7abd8d526b4d423909a7fbc06c78fdc64fafb744a5ee6d1460" exitCode=0
Feb 20 00:24:11 crc kubenswrapper[5107]: I0220 00:24:11.149517 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerDied","Data":"67f22d6d927f4b7abd8d526b4d423909a7fbc06c78fdc64fafb744a5ee6d1460"}
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.589352 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658043 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658113 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658182 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658206 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658241 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658271 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658307 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658375 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658406 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658500 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658533 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658616 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658681 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxzzg\" (UniqueName: \"kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg\") pod \"102ebadc-175f-4435-957c-5b47a0368962\" (UID: \"102ebadc-175f-4435-957c-5b47a0368962\") "
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.658980 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.659637 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "buildcachedir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.660546 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.660569 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.661784 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.665974 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.666519 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.666628 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg" (OuterVolumeSpecName: "kube-api-access-wxzzg") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "kube-api-access-wxzzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.667087 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.673784 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760480 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760518 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760530 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxzzg\" (UniqueName: \"kubernetes.io/projected/102ebadc-175f-4435-957c-5b47a0368962-kube-api-access-wxzzg\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760543 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/102ebadc-175f-4435-957c-5b47a0368962-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760555 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760568 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760579 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/102ebadc-175f-4435-957c-5b47a0368962-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 
20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760590 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.760600 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/102ebadc-175f-4435-957c-5b47a0368962-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.862027 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:12 crc kubenswrapper[5107]: I0220 00:24:12.964266 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:13 crc kubenswrapper[5107]: I0220 00:24:13.185641 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 20 00:24:13 crc kubenswrapper[5107]: I0220 00:24:13.185663 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"102ebadc-175f-4435-957c-5b47a0368962","Type":"ContainerDied","Data":"8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509"} Feb 20 00:24:13 crc kubenswrapper[5107]: I0220 00:24:13.186395 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0da6f8e061f45c465ad1a8c739162440de0f6013812e84c22d834f6b24e509" Feb 20 00:24:14 crc kubenswrapper[5107]: I0220 00:24:14.955601 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "102ebadc-175f-4435-957c-5b47a0368962" (UID: "102ebadc-175f-4435-957c-5b47a0368962"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:14 crc kubenswrapper[5107]: I0220 00:24:14.998366 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/102ebadc-175f-4435-957c-5b47a0368962-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.892803 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893726 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="94ab841c-7137-4098-ab2a-00c93ae365cd" containerName="oc" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893741 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ab841c-7137-4098-ab2a-00c93ae365cd" containerName="oc" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893760 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="extract-utilities" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893768 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="extract-utilities" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893794 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="extract-content" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893801 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="extract-content" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893817 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="git-clone" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893824 5107 
state_mem.go:107] "Deleted CPUSet assignment" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="git-clone" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893834 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="registry-server" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893841 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="registry-server" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893851 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="manage-dockerfile" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893858 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="manage-dockerfile" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893868 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="docker-build" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893874 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="docker-build" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.893989 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="94ab841c-7137-4098-ab2a-00c93ae365cd" containerName="oc" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.894001 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="be5bdfba-f643-4d0c-9def-c849da707717" containerName="registry-server" Feb 20 00:24:16 crc kubenswrapper[5107]: I0220 00:24:16.894017 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="102ebadc-175f-4435-957c-5b47a0368962" containerName="docker-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 
00:24:17.224613 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.225456 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.228886 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.229053 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.229409 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.231844 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.334826 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.334905 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.334982 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335015 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhrjn\" (UniqueName: \"kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335062 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335093 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335162 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335189 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335216 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335262 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335297 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.335378 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.436709 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.436901 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.436949 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.436982 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437019 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhrjn\" (UniqueName: \"kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437053 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" 
(UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437103 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437164 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437199 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437231 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437262 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437308 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.437495 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.438202 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.438339 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.438394 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets\") pod \"sg-core-1-build\" (UID: 
\"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.440401 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.440693 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.440862 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.441729 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.441737 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.449203 
5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.450615 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.466083 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhrjn\" (UniqueName: \"kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn\") pod \"sg-core-1-build\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.559554 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 20 00:24:17 crc kubenswrapper[5107]: I0220 00:24:17.804330 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:17 crc kubenswrapper[5107]: W0220 00:24:17.811072 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5773c0c1_b840_4692_a711_a4f08f32a2da.slice/crio-126e7e9f0b3c17a25d1dbfb66f79265fcd55a2cb4b145594f9be9ccb94a4875c WatchSource:0}: Error finding container 126e7e9f0b3c17a25d1dbfb66f79265fcd55a2cb4b145594f9be9ccb94a4875c: Status 404 returned error can't find the container with id 126e7e9f0b3c17a25d1dbfb66f79265fcd55a2cb4b145594f9be9ccb94a4875c Feb 20 00:24:18 crc kubenswrapper[5107]: I0220 00:24:18.233445 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerStarted","Data":"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea"} Feb 20 00:24:18 crc kubenswrapper[5107]: I0220 00:24:18.233510 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerStarted","Data":"126e7e9f0b3c17a25d1dbfb66f79265fcd55a2cb4b145594f9be9ccb94a4875c"} Feb 20 00:24:19 crc kubenswrapper[5107]: I0220 00:24:19.242845 5107 generic.go:358] "Generic (PLEG): container finished" podID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerID="effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea" exitCode=0 Feb 20 00:24:19 crc kubenswrapper[5107]: I0220 00:24:19.242906 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerDied","Data":"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea"} Feb 20 00:24:20 crc kubenswrapper[5107]: I0220 
00:24:20.254203 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerStarted","Data":"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e"} Feb 20 00:24:20 crc kubenswrapper[5107]: I0220 00:24:20.285049 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=4.285022748 podStartE2EDuration="4.285022748s" podCreationTimestamp="2026-02-20 00:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:24:20.282343323 +0000 UTC m=+946.651000899" watchObservedRunningTime="2026-02-20 00:24:20.285022748 +0000 UTC m=+946.653680354" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.207638 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.208580 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="docker-build" containerID="cri-o://7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e" gracePeriod=30 Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.734467 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_5773c0c1-b840-4692-a711-a4f08f32a2da/docker-build/0.log" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.735330 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910244 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910384 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910429 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910458 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910518 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910554 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910595 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhrjn\" (UniqueName: \"kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910636 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910665 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910683 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910704 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910751 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles\") pod \"5773c0c1-b840-4692-a711-a4f08f32a2da\" (UID: \"5773c0c1-b840-4692-a711-a4f08f32a2da\") " Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.910911 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911085 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911111 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911132 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911690 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911711 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.911934 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.912053 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.918392 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn" (OuterVolumeSpecName: "kube-api-access-hhrjn") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "kube-api-access-hhrjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.918498 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:24:27 crc kubenswrapper[5107]: I0220 00:24:27.920065 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.007638 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.012701 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.012911 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013044 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013199 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013343 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/5773c0c1-b840-4692-a711-a4f08f32a2da-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013557 5107 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-hhrjn\" (UniqueName: \"kubernetes.io/projected/5773c0c1-b840-4692-a711-a4f08f32a2da-kube-api-access-hhrjn\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013685 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013807 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5773c0c1-b840-4692-a711-a4f08f32a2da-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.013979 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5773c0c1-b840-4692-a711-a4f08f32a2da-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.014114 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.037239 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5773c0c1-b840-4692-a711-a4f08f32a2da" (UID: "5773c0c1-b840-4692-a711-a4f08f32a2da"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.115747 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5773c0c1-b840-4692-a711-a4f08f32a2da-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.325818 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_5773c0c1-b840-4692-a711-a4f08f32a2da/docker-build/0.log" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.326580 5107 generic.go:358] "Generic (PLEG): container finished" podID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerID="7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e" exitCode=1 Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.326750 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.326837 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerDied","Data":"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e"} Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.326932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"5773c0c1-b840-4692-a711-a4f08f32a2da","Type":"ContainerDied","Data":"126e7e9f0b3c17a25d1dbfb66f79265fcd55a2cb4b145594f9be9ccb94a4875c"} Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.327016 5107 scope.go:117] "RemoveContainer" containerID="7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.394586 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 
00:24:28.404508 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.405968 5107 scope.go:117] "RemoveContainer" containerID="effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.475953 5107 scope.go:117] "RemoveContainer" containerID="7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e" Feb 20 00:24:28 crc kubenswrapper[5107]: E0220 00:24:28.476764 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e\": container with ID starting with 7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e not found: ID does not exist" containerID="7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.476812 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e"} err="failed to get container status \"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e\": rpc error: code = NotFound desc = could not find container \"7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e\": container with ID starting with 7fc7596bb82f7824aca74405caefa1921c1b97cad45c9f70426a774b8c38fb7e not found: ID does not exist" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.476836 5107 scope.go:117] "RemoveContainer" containerID="effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea" Feb 20 00:24:28 crc kubenswrapper[5107]: E0220 00:24:28.477504 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea\": container with ID starting with 
effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea not found: ID does not exist" containerID="effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.477544 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea"} err="failed to get container status \"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea\": rpc error: code = NotFound desc = could not find container \"effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea\": container with ID starting with effab874e038c52c07ce6eb771157e50b0c6b6c352ffda8ad06bb0d585fdebea not found: ID does not exist" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.499365 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" path="/var/lib/kubelet/pods/5773c0c1-b840-4692-a711-a4f08f32a2da/volumes" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.864990 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.865844 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="manage-dockerfile" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.865863 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="manage-dockerfile" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.865882 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="docker-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.865891 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="docker-build" Feb 20 
00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.866022 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="5773c0c1-b840-4692-a711-a4f08f32a2da" containerName="docker-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.889931 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.889930 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.892118 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.892722 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.894296 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.894654 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928554 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq26g\" (UniqueName: \"kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928670 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles\") pod \"sg-core-2-build\" 
(UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928727 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928769 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928829 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928859 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928902 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets\") pod \"sg-core-2-build\" (UID: 
\"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928938 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.928991 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.929122 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.929236 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:28 crc kubenswrapper[5107]: I0220 00:24:28.929267 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: 
\"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030735 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030779 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030802 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030827 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030850 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root\") 
pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030879 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030914 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030935 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.030984 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dq26g\" (UniqueName: \"kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031018 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031047 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031172 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031179 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031398 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031596 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031834 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031845 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.031874 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.032297 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.032600 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.037028 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.037080 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.055589 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq26g\" (UniqueName: \"kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g\") pod \"sg-core-2-build\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.242594 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.484118 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 20 00:24:29 crc kubenswrapper[5107]: I0220 00:24:29.493603 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:24:30 crc kubenswrapper[5107]: I0220 00:24:30.344754 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerStarted","Data":"f83ea64986a828099922b9a822b1c1a991c1f3379fc2c8374c83370b85429933"} Feb 20 00:24:30 crc kubenswrapper[5107]: I0220 00:24:30.345294 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerStarted","Data":"34c6167762d58ed322c335d345a1d56e72d625d38723533358b1782d5d60728b"} Feb 20 00:24:31 crc kubenswrapper[5107]: I0220 00:24:31.360627 5107 generic.go:358] "Generic (PLEG): container finished" podID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerID="f83ea64986a828099922b9a822b1c1a991c1f3379fc2c8374c83370b85429933" exitCode=0 Feb 20 00:24:31 crc kubenswrapper[5107]: I0220 00:24:31.360782 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerDied","Data":"f83ea64986a828099922b9a822b1c1a991c1f3379fc2c8374c83370b85429933"} Feb 20 00:24:32 crc kubenswrapper[5107]: I0220 00:24:32.374420 5107 generic.go:358] "Generic (PLEG): container finished" podID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerID="75e5ca2802a75cbc347ff6ab000987333db238c5324917a1b40a87e4d768334b" exitCode=0 Feb 20 00:24:32 crc kubenswrapper[5107]: I0220 00:24:32.375098 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerDied","Data":"75e5ca2802a75cbc347ff6ab000987333db238c5324917a1b40a87e4d768334b"} Feb 20 00:24:32 crc kubenswrapper[5107]: I0220 00:24:32.416968 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_55de4a7e-258c-44be-ae15-b6a2acebfd00/manage-dockerfile/0.log" Feb 20 00:24:33 crc kubenswrapper[5107]: I0220 00:24:33.390390 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerStarted","Data":"908ce36618f5d576fec04c878aeb0a3eceacf444ed66446dc10997bb8730242b"} Feb 20 00:24:33 crc kubenswrapper[5107]: I0220 00:24:33.438098 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.438048804 podStartE2EDuration="5.438048804s" podCreationTimestamp="2026-02-20 00:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:24:33.429083793 +0000 UTC m=+959.797741449" watchObservedRunningTime="2026-02-20 00:24:33.438048804 +0000 UTC m=+959.806706400" Feb 20 00:24:35 crc kubenswrapper[5107]: I0220 00:24:35.128802 5107 scope.go:117] "RemoveContainer" containerID="78a86da67cbb12eed73cba0d469b3fe2d9584f25586e955d3f37b90b5878a7e5" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.146658 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525786-5fsgl"] Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.389489 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-5fsgl"] Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.389652 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.395907 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.395958 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.396209 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.497893 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9k5j\" (UniqueName: \"kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j\") pod \"auto-csr-approver-29525786-5fsgl\" (UID: \"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b\") " pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.599096 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c9k5j\" (UniqueName: \"kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j\") pod \"auto-csr-approver-29525786-5fsgl\" (UID: \"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b\") " pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.619034 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9k5j\" (UniqueName: \"kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j\") pod \"auto-csr-approver-29525786-5fsgl\" (UID: \"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b\") " pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.714987 5107 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:00 crc kubenswrapper[5107]: I0220 00:26:00.953009 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-5fsgl"] Feb 20 00:26:00 crc kubenswrapper[5107]: W0220 00:26:00.960420 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1275ee6b_e46c_4c79_8c4f_e326dbab2b6b.slice/crio-7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322 WatchSource:0}: Error finding container 7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322: Status 404 returned error can't find the container with id 7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322 Feb 20 00:26:01 crc kubenswrapper[5107]: I0220 00:26:01.064793 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" event={"ID":"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b","Type":"ContainerStarted","Data":"7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322"} Feb 20 00:26:03 crc kubenswrapper[5107]: I0220 00:26:03.079867 5107 generic.go:358] "Generic (PLEG): container finished" podID="1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" containerID="1fd70078f084d376ffcce1ca90d8a0e161a3a01cf7d0f62347b4fc1be9ab24b8" exitCode=0 Feb 20 00:26:03 crc kubenswrapper[5107]: I0220 00:26:03.079964 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" event={"ID":"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b","Type":"ContainerDied","Data":"1fd70078f084d376ffcce1ca90d8a0e161a3a01cf7d0f62347b4fc1be9ab24b8"} Feb 20 00:26:04 crc kubenswrapper[5107]: I0220 00:26:04.382742 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:04 crc kubenswrapper[5107]: I0220 00:26:04.454754 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9k5j\" (UniqueName: \"kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j\") pod \"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b\" (UID: \"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b\") " Feb 20 00:26:04 crc kubenswrapper[5107]: I0220 00:26:04.479434 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j" (OuterVolumeSpecName: "kube-api-access-c9k5j") pod "1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" (UID: "1275ee6b-e46c-4c79-8c4f-e326dbab2b6b"). InnerVolumeSpecName "kube-api-access-c9k5j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:26:04 crc kubenswrapper[5107]: I0220 00:26:04.557119 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c9k5j\" (UniqueName: \"kubernetes.io/projected/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b-kube-api-access-c9k5j\") on node \"crc\" DevicePath \"\"" Feb 20 00:26:05 crc kubenswrapper[5107]: I0220 00:26:05.098280 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" event={"ID":"1275ee6b-e46c-4c79-8c4f-e326dbab2b6b","Type":"ContainerDied","Data":"7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322"} Feb 20 00:26:05 crc kubenswrapper[5107]: I0220 00:26:05.098791 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7d0b40afef51f89ad473e37c48e488330c6334a1721814b4a425f3c0fd8322" Feb 20 00:26:05 crc kubenswrapper[5107]: I0220 00:26:05.098302 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525786-5fsgl" Feb 20 00:26:05 crc kubenswrapper[5107]: I0220 00:26:05.464637 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-fkb67"] Feb 20 00:26:05 crc kubenswrapper[5107]: I0220 00:26:05.475608 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525780-fkb67"] Feb 20 00:26:06 crc kubenswrapper[5107]: I0220 00:26:06.493753 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811bd17b-84bc-41bb-aaed-ca5fff6a638e" path="/var/lib/kubelet/pods/811bd17b-84bc-41bb-aaed-ca5fff6a638e/volumes" Feb 20 00:26:32 crc kubenswrapper[5107]: I0220 00:26:32.824918 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:26:32 crc kubenswrapper[5107]: I0220 00:26:32.825488 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:26:35 crc kubenswrapper[5107]: I0220 00:26:35.277005 5107 scope.go:117] "RemoveContainer" containerID="741c24ae042c7236eec96bb6a2f2fa47bafe41a88ead0a05ee90586b8eda2654" Feb 20 00:27:02 crc kubenswrapper[5107]: I0220 00:27:02.824483 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:27:02 crc kubenswrapper[5107]: 
I0220 00:27:02.825082 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:27:32 crc kubenswrapper[5107]: I0220 00:27:32.824421 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:27:32 crc kubenswrapper[5107]: I0220 00:27:32.824877 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:27:32 crc kubenswrapper[5107]: I0220 00:27:32.824911 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:27:32 crc kubenswrapper[5107]: I0220 00:27:32.825408 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:27:32 crc kubenswrapper[5107]: I0220 00:27:32.825460 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" 
containerName="machine-config-daemon" containerID="cri-o://0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179" gracePeriod=600 Feb 20 00:27:33 crc kubenswrapper[5107]: I0220 00:27:33.833109 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179"} Feb 20 00:27:33 crc kubenswrapper[5107]: I0220 00:27:33.833673 5107 scope.go:117] "RemoveContainer" containerID="4236a7d7e51c71204dad21d268dc89276f97ace5da6c6fcbe95fa8e36b47948f" Feb 20 00:27:33 crc kubenswrapper[5107]: I0220 00:27:33.833045 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179" exitCode=0 Feb 20 00:27:33 crc kubenswrapper[5107]: I0220 00:27:33.834024 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f"} Feb 20 00:27:51 crc kubenswrapper[5107]: I0220 00:27:51.977569 5107 generic.go:358] "Generic (PLEG): container finished" podID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerID="908ce36618f5d576fec04c878aeb0a3eceacf444ed66446dc10997bb8730242b" exitCode=0 Feb 20 00:27:51 crc kubenswrapper[5107]: I0220 00:27:51.978804 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerDied","Data":"908ce36618f5d576fec04c878aeb0a3eceacf444ed66446dc10997bb8730242b"} Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.247373 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330184 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330368 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330487 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq26g\" (UniqueName: \"kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330546 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330598 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330673 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330867 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.330979 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331036 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331086 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: 
\"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331158 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles\") pod \"55de4a7e-258c-44be-ae15-b6a2acebfd00\" (UID: \"55de4a7e-258c-44be-ae15-b6a2acebfd00\") " Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331555 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331602 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331800 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.331824 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.332446 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.332851 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.333213 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.338069 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g" (OuterVolumeSpecName: "kube-api-access-dq26g") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "kube-api-access-dq26g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.338299 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.339148 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.339358 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.343005 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432765 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432806 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432820 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/55de4a7e-258c-44be-ae15-b6a2acebfd00-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432834 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432846 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dq26g\" (UniqueName: \"kubernetes.io/projected/55de4a7e-258c-44be-ae15-b6a2acebfd00-kube-api-access-dq26g\") on node \"crc\" DevicePath \"\"" Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432857 5107 
reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432869 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.432879 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.679358 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.740017 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.996428 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"55de4a7e-258c-44be-ae15-b6a2acebfd00","Type":"ContainerDied","Data":"34c6167762d58ed322c335d345a1d56e72d625d38723533358b1782d5d60728b"}
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.996529 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c6167762d58ed322c335d345a1d56e72d625d38723533358b1782d5d60728b"
Feb 20 00:27:53 crc kubenswrapper[5107]: I0220 00:27:53.996565 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Feb 20 00:27:56 crc kubenswrapper[5107]: I0220 00:27:56.728646 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "55de4a7e-258c-44be-ae15-b6a2acebfd00" (UID: "55de4a7e-258c-44be-ae15-b6a2acebfd00"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:27:56 crc kubenswrapper[5107]: I0220 00:27:56.788747 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/55de4a7e-258c-44be-ae15-b6a2acebfd00-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.560861 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.561912 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="git-clone"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.561945 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="git-clone"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.561960 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" containerName="oc"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.561972 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" containerName="oc"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562013 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="docker-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562026 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="docker-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562070 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="manage-dockerfile"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562082 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="manage-dockerfile"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562302 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="55de4a7e-258c-44be-ae15-b6a2acebfd00" containerName="docker-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.562327 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" containerName="oc"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.673764 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.673904 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.676954 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\""
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.677550 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\""
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.678070 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\""
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.678623 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\""
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.804723 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805391 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805461 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805531 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnv2\" (UniqueName: \"kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805694 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805790 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805834 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805919 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805962 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.805997 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.806031 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.907713 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.907768 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.907811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.907839 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908064 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908248 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908286 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908351 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908437 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908487 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908501 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rgnv2\" (UniqueName: \"kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908514 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908750 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.908981 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.909132 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.909356 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.909392 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.909664 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.910411 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.921297 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.924578 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.937537 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgnv2\" (UniqueName: \"kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2\") pod \"sg-bridge-1-build\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") " pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:57 crc kubenswrapper[5107]: I0220 00:27:57.994642 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:27:58 crc kubenswrapper[5107]: I0220 00:27:58.213447 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Feb 20 00:27:59 crc kubenswrapper[5107]: I0220 00:27:59.053290 5107 generic.go:358] "Generic (PLEG): container finished" podID="caeb7c10-b57e-4874-943e-74befde94cfc" containerID="8cc73df66c97c361561ea723ae3e4af4b49e5f44171df4e3329914acac375a3f" exitCode=0
Feb 20 00:27:59 crc kubenswrapper[5107]: I0220 00:27:59.053664 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"caeb7c10-b57e-4874-943e-74befde94cfc","Type":"ContainerDied","Data":"8cc73df66c97c361561ea723ae3e4af4b49e5f44171df4e3329914acac375a3f"}
Feb 20 00:27:59 crc kubenswrapper[5107]: I0220 00:27:59.053686 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"caeb7c10-b57e-4874-943e-74befde94cfc","Type":"ContainerStarted","Data":"ac5be3138af402135d181794bc81af231bfe9c8d94c36503cbc114b262ccb028"}
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.069321 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"caeb7c10-b57e-4874-943e-74befde94cfc","Type":"ContainerStarted","Data":"f9be35e3ce270f23a7b1092105ab721d7c2ae327db8d7fe38c165d3ed8160efb"}
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.103866 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.103845148 podStartE2EDuration="3.103845148s" podCreationTimestamp="2026-02-20 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:28:00.098484068 +0000 UTC m=+1166.467141654" watchObservedRunningTime="2026-02-20 00:28:00.103845148 +0000 UTC m=+1166.472502734"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.144188 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525788-7xvgj"]
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.148083 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.150293 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.150320 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.150881 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\""
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.156309 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-7xvgj"]
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.267290 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc28\" (UniqueName: \"kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28\") pod \"auto-csr-approver-29525788-7xvgj\" (UID: \"863755a4-5fa7-4005-9d6b-aa969fe9e5a6\") " pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.368390 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc28\" (UniqueName: \"kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28\") pod \"auto-csr-approver-29525788-7xvgj\" (UID: \"863755a4-5fa7-4005-9d6b-aa969fe9e5a6\") " pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.392168 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc28\" (UniqueName: \"kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28\") pod \"auto-csr-approver-29525788-7xvgj\" (UID: \"863755a4-5fa7-4005-9d6b-aa969fe9e5a6\") " pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.492954 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:00 crc kubenswrapper[5107]: I0220 00:28:00.696391 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-7xvgj"]
Feb 20 00:28:00 crc kubenswrapper[5107]: W0220 00:28:00.703613 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod863755a4_5fa7_4005_9d6b_aa969fe9e5a6.slice/crio-db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516 WatchSource:0}: Error finding container db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516: Status 404 returned error can't find the container with id db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516
Feb 20 00:28:01 crc kubenswrapper[5107]: I0220 00:28:01.080949 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-7xvgj" event={"ID":"863755a4-5fa7-4005-9d6b-aa969fe9e5a6","Type":"ContainerStarted","Data":"db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516"}
Feb 20 00:28:02 crc kubenswrapper[5107]: I0220 00:28:02.089413 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-7xvgj" event={"ID":"863755a4-5fa7-4005-9d6b-aa969fe9e5a6","Type":"ContainerStarted","Data":"0ae97859dc5da33b25e25cbcf9b98377baca5a6f4450291f5ed9e06c0f34ddc2"}
Feb 20 00:28:02 crc kubenswrapper[5107]: I0220 00:28:02.119060 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525788-7xvgj" podStartSLOduration=1.128601671 podStartE2EDuration="2.119028067s" podCreationTimestamp="2026-02-20 00:28:00 +0000 UTC" firstStartedPulling="2026-02-20 00:28:00.706432989 +0000 UTC m=+1167.075090555" lastFinishedPulling="2026-02-20 00:28:01.696859385 +0000 UTC m=+1168.065516951" observedRunningTime="2026-02-20 00:28:02.112224106 +0000 UTC m=+1168.480881702" watchObservedRunningTime="2026-02-20 00:28:02.119028067 +0000 UTC m=+1168.487685673"
Feb 20 00:28:03 crc kubenswrapper[5107]: I0220 00:28:03.100130 5107 generic.go:358] "Generic (PLEG): container finished" podID="863755a4-5fa7-4005-9d6b-aa969fe9e5a6" containerID="0ae97859dc5da33b25e25cbcf9b98377baca5a6f4450291f5ed9e06c0f34ddc2" exitCode=0
Feb 20 00:28:03 crc kubenswrapper[5107]: I0220 00:28:03.100325 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-7xvgj" event={"ID":"863755a4-5fa7-4005-9d6b-aa969fe9e5a6","Type":"ContainerDied","Data":"0ae97859dc5da33b25e25cbcf9b98377baca5a6f4450291f5ed9e06c0f34ddc2"}
Feb 20 00:28:04 crc kubenswrapper[5107]: I0220 00:28:04.421436 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:04 crc kubenswrapper[5107]: I0220 00:28:04.558457 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfc28\" (UniqueName: \"kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28\") pod \"863755a4-5fa7-4005-9d6b-aa969fe9e5a6\" (UID: \"863755a4-5fa7-4005-9d6b-aa969fe9e5a6\") "
Feb 20 00:28:04 crc kubenswrapper[5107]: I0220 00:28:04.567794 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28" (OuterVolumeSpecName: "kube-api-access-hfc28") pod "863755a4-5fa7-4005-9d6b-aa969fe9e5a6" (UID: "863755a4-5fa7-4005-9d6b-aa969fe9e5a6"). InnerVolumeSpecName "kube-api-access-hfc28". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:28:04 crc kubenswrapper[5107]: I0220 00:28:04.661218 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfc28\" (UniqueName: \"kubernetes.io/projected/863755a4-5fa7-4005-9d6b-aa969fe9e5a6-kube-api-access-hfc28\") on node \"crc\" DevicePath \"\""
Feb 20 00:28:05 crc kubenswrapper[5107]: I0220 00:28:05.120543 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525788-7xvgj" event={"ID":"863755a4-5fa7-4005-9d6b-aa969fe9e5a6","Type":"ContainerDied","Data":"db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516"}
Feb 20 00:28:05 crc kubenswrapper[5107]: I0220 00:28:05.120565 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525788-7xvgj"
Feb 20 00:28:05 crc kubenswrapper[5107]: I0220 00:28:05.120590 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db703960861da1854f7690f9ed4db32b68bea81889ef48ae4e0c4f92b79f7516"
Feb 20 00:28:05 crc kubenswrapper[5107]: I0220 00:28:05.198223 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-9877z"]
Feb 20 00:28:05 crc kubenswrapper[5107]: I0220 00:28:05.204875 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525782-9877z"]
Feb 20 00:28:06 crc kubenswrapper[5107]: I0220 00:28:06.497785 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15825612-2f1d-4906-839c-0828fdacf8ea" path="/var/lib/kubelet/pods/15825612-2f1d-4906-839c-0828fdacf8ea/volumes"
Feb 20 00:28:07 crc kubenswrapper[5107]: I0220 00:28:07.941663 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Feb 20 00:28:07 crc kubenswrapper[5107]: I0220 00:28:07.942108 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="docker-build" containerID="cri-o://f9be35e3ce270f23a7b1092105ab721d7c2ae327db8d7fe38c165d3ed8160efb" gracePeriod=30
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.149323 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_caeb7c10-b57e-4874-943e-74befde94cfc/docker-build/0.log"
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.150057 5107 generic.go:358] "Generic (PLEG): container finished" podID="caeb7c10-b57e-4874-943e-74befde94cfc" containerID="f9be35e3ce270f23a7b1092105ab721d7c2ae327db8d7fe38c165d3ed8160efb" exitCode=1
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.150213 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"caeb7c10-b57e-4874-943e-74befde94cfc","Type":"ContainerDied","Data":"f9be35e3ce270f23a7b1092105ab721d7c2ae327db8d7fe38c165d3ed8160efb"}
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.324545 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_caeb7c10-b57e-4874-943e-74befde94cfc/docker-build/0.log"
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.324999 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.421601 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422314 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422371 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422407 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422397 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422441 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422494 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422519 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422548 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422569 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgnv2\" (UniqueName: \"kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422594 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422616 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422647 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs\") pod \"caeb7c10-b57e-4874-943e-74befde94cfc\" (UID: \"caeb7c10-b57e-4874-943e-74befde94cfc\") "
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.422932 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.423275 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.423551 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.423637 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.423939 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.423954 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "build-proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.425298 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.429120 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.429625 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2" (OuterVolumeSpecName: "kube-api-access-rgnv2") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "kube-api-access-rgnv2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.430001 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.478640 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.524428 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rgnv2\" (UniqueName: \"kubernetes.io/projected/caeb7c10-b57e-4874-943e-74befde94cfc-kube-api-access-rgnv2\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.524747 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.524838 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/caeb7c10-b57e-4874-943e-74befde94cfc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.524920 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525043 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525179 5107 reconciler_common.go:299] "Volume 
detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525284 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525361 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/caeb7c10-b57e-4874-943e-74befde94cfc-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525443 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.525526 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caeb7c10-b57e-4874-943e-74befde94cfc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.814650 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "caeb7c10-b57e-4874-943e-74befde94cfc" (UID: "caeb7c10-b57e-4874-943e-74befde94cfc"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:28:08 crc kubenswrapper[5107]: I0220 00:28:08.828595 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/caeb7c10-b57e-4874-943e-74befde94cfc-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.163800 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_caeb7c10-b57e-4874-943e-74befde94cfc/docker-build/0.log" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.164827 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.164844 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"caeb7c10-b57e-4874-943e-74befde94cfc","Type":"ContainerDied","Data":"ac5be3138af402135d181794bc81af231bfe9c8d94c36503cbc114b262ccb028"} Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.164928 5107 scope.go:117] "RemoveContainer" containerID="f9be35e3ce270f23a7b1092105ab721d7c2ae327db8d7fe38c165d3ed8160efb" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.197840 5107 scope.go:117] "RemoveContainer" containerID="8cc73df66c97c361561ea723ae3e4af4b49e5f44171df4e3329914acac375a3f" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.213499 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.225822 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.652362 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654055 5107 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="docker-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654096 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="docker-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654179 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="manage-dockerfile" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654194 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="manage-dockerfile" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654210 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="863755a4-5fa7-4005-9d6b-aa969fe9e5a6" containerName="oc" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654223 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="863755a4-5fa7-4005-9d6b-aa969fe9e5a6" containerName="oc" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654440 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" containerName="docker-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.654471 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="863755a4-5fa7-4005-9d6b-aa969fe9e5a6" containerName="oc" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.720183 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.720347 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.722624 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.722702 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.723269 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.731784 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743092 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743325 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743465 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run\") pod \"sg-bridge-2-build\" (UID: 
\"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743559 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743617 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743721 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9lw\" (UniqueName: \"kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743813 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743929 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull\") pod 
\"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.743996 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.744037 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.744077 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.744183 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846421 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846535 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846594 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846666 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jd9lw\" (UniqueName: \"kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846724 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846811 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull\") 
pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846902 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.846993 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.847046 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.847319 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.847532 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " 
pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.847667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.848516 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.848637 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.847757 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.848569 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: 
I0220 00:28:09.849039 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.849268 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.849316 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.849307 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.850051 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.853727 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" 
(UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.857801 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:09 crc kubenswrapper[5107]: I0220 00:28:09.877263 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd9lw\" (UniqueName: \"kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw\") pod \"sg-bridge-2-build\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:10 crc kubenswrapper[5107]: I0220 00:28:10.045819 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 20 00:28:10 crc kubenswrapper[5107]: I0220 00:28:10.347786 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 20 00:28:10 crc kubenswrapper[5107]: I0220 00:28:10.493912 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caeb7c10-b57e-4874-943e-74befde94cfc" path="/var/lib/kubelet/pods/caeb7c10-b57e-4874-943e-74befde94cfc/volumes" Feb 20 00:28:11 crc kubenswrapper[5107]: I0220 00:28:11.193624 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerStarted","Data":"f06868340d337c7350af590568b9c836a45cb44bd055537e96ad37bc4c03c875"} Feb 20 00:28:11 crc kubenswrapper[5107]: I0220 00:28:11.194200 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerStarted","Data":"d57008ea7c6d593422dbadc0c96209e0f254498fad198ed279ff3b4cbd3e7a0a"} Feb 20 00:28:12 crc kubenswrapper[5107]: I0220 00:28:12.211336 5107 generic.go:358] "Generic (PLEG): container finished" podID="382cdca2-7142-4ef1-ac6b-7879df482058" containerID="f06868340d337c7350af590568b9c836a45cb44bd055537e96ad37bc4c03c875" exitCode=0 Feb 20 00:28:12 crc kubenswrapper[5107]: I0220 00:28:12.211778 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerDied","Data":"f06868340d337c7350af590568b9c836a45cb44bd055537e96ad37bc4c03c875"} Feb 20 00:28:13 crc kubenswrapper[5107]: I0220 00:28:13.220562 5107 generic.go:358] "Generic (PLEG): container finished" podID="382cdca2-7142-4ef1-ac6b-7879df482058" containerID="a01d394cb3d666ca3b3812be1d20960f39d31d79d89c3b3f57cb4d967ad344e6" exitCode=0 Feb 20 00:28:13 crc kubenswrapper[5107]: I0220 00:28:13.220673 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerDied","Data":"a01d394cb3d666ca3b3812be1d20960f39d31d79d89c3b3f57cb4d967ad344e6"} Feb 20 00:28:13 crc kubenswrapper[5107]: I0220 00:28:13.258560 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_382cdca2-7142-4ef1-ac6b-7879df482058/manage-dockerfile/0.log" Feb 20 00:28:14 crc kubenswrapper[5107]: I0220 00:28:14.235249 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerStarted","Data":"3e411a92e10f5842a6c1b316236c10a645a825991feb7b7900a464dfef20a45f"} Feb 20 00:28:14 crc kubenswrapper[5107]: I0220 00:28:14.277127 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.277097311 podStartE2EDuration="5.277097311s" podCreationTimestamp="2026-02-20 00:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:28:14.274833197 +0000 UTC m=+1180.643490793" watchObservedRunningTime="2026-02-20 00:28:14.277097311 +0000 UTC m=+1180.645754927" Feb 20 00:28:35 crc kubenswrapper[5107]: I0220 00:28:35.057077 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:28:35 crc kubenswrapper[5107]: I0220 00:28:35.067684 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:28:35 crc kubenswrapper[5107]: I0220 00:28:35.069801 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:28:35 crc kubenswrapper[5107]: I0220 00:28:35.077370 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:28:35 crc kubenswrapper[5107]: I0220 00:28:35.434153 5107 scope.go:117] "RemoveContainer" containerID="a2bbd7bc3a041d5be8fb720f1c213cb3e498aa9bb14058c57edcaf0aad17e082" Feb 20 00:29:01 crc kubenswrapper[5107]: I0220 00:29:01.638807 5107 generic.go:358] "Generic (PLEG): container finished" podID="382cdca2-7142-4ef1-ac6b-7879df482058" containerID="3e411a92e10f5842a6c1b316236c10a645a825991feb7b7900a464dfef20a45f" exitCode=0 Feb 20 00:29:01 crc kubenswrapper[5107]: I0220 00:29:01.639046 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerDied","Data":"3e411a92e10f5842a6c1b316236c10a645a825991feb7b7900a464dfef20a45f"} Feb 20 00:29:02 crc kubenswrapper[5107]: I0220 00:29:02.985182 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.068263 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.068580 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.068803 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.068998 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.069279 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.069455 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.069717 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.069807 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070130 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070379 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070572 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd9lw\" (UniqueName: \"kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070620 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070659 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070676 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.070729 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.071239 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache\") pod \"382cdca2-7142-4ef1-ac6b-7879df482058\" (UID: \"382cdca2-7142-4ef1-ac6b-7879df482058\") " Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.071128 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.071437 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.071982 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072128 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072305 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072443 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072576 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/382cdca2-7142-4ef1-ac6b-7879df482058-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072713 5107 reconciler_common.go:299] "Volume 
detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/382cdca2-7142-4ef1-ac6b-7879df482058-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.072802 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.076627 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw" (OuterVolumeSpecName: "kube-api-access-jd9lw") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "kube-api-access-jd9lw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.077458 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.078074 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.173995 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jd9lw\" (UniqueName: \"kubernetes.io/projected/382cdca2-7142-4ef1-ac6b-7879df482058-kube-api-access-jd9lw\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.174032 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.174042 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/382cdca2-7142-4ef1-ac6b-7879df482058-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.174050 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.179983 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.275209 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.662460 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"382cdca2-7142-4ef1-ac6b-7879df482058","Type":"ContainerDied","Data":"d57008ea7c6d593422dbadc0c96209e0f254498fad198ed279ff3b4cbd3e7a0a"} Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.662860 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57008ea7c6d593422dbadc0c96209e0f254498fad198ed279ff3b4cbd3e7a0a" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.662579 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.978180 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "382cdca2-7142-4ef1-ac6b-7879df482058" (UID: "382cdca2-7142-4ef1-ac6b-7879df482058"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:03 crc kubenswrapper[5107]: I0220 00:29:03.987971 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/382cdca2-7142-4ef1-ac6b-7879df482058-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.025856 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027479 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="git-clone" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027498 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="git-clone" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027513 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="docker-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027519 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="docker-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027536 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="manage-dockerfile" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027547 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="manage-dockerfile" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.027732 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="382cdca2-7142-4ef1-ac6b-7879df482058" containerName="docker-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.036847 5107 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.039306 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.039363 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.039781 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.043872 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.053087 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139324 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139412 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 
crc kubenswrapper[5107]: I0220 00:29:07.139453 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139491 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139530 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139606 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139635 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139663 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139689 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139751 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.139835 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 
00:29:07.139880 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rzh\" (UniqueName: \"kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.241491 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.241735 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.241774 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rzh\" (UniqueName: \"kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.241827 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.241856 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.242209 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.242997 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.243487 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.243597 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.243685 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.243701 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.243960 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244126 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244220 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244619 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244622 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244715 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.244761 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.245186 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.245787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.252833 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.252846 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 
00:29:07.279690 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rzh\" (UniqueName: \"kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.360028 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.676383 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:07 crc kubenswrapper[5107]: I0220 00:29:07.700210 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"eb21134f-492a-4861-96c5-1ea2d5074c62","Type":"ContainerStarted","Data":"ddd241796727fc672d6120b25136630663192d6580081a17257d782ceb6272ab"} Feb 20 00:29:08 crc kubenswrapper[5107]: I0220 00:29:08.709285 5107 generic.go:358] "Generic (PLEG): container finished" podID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerID="ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d" exitCode=0 Feb 20 00:29:08 crc kubenswrapper[5107]: I0220 00:29:08.709419 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"eb21134f-492a-4861-96c5-1ea2d5074c62","Type":"ContainerDied","Data":"ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d"} Feb 20 00:29:09 crc kubenswrapper[5107]: I0220 00:29:09.734066 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"eb21134f-492a-4861-96c5-1ea2d5074c62","Type":"ContainerStarted","Data":"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6"} Feb 20 00:29:09 crc 
kubenswrapper[5107]: I0220 00:29:09.760622 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=2.760593252 podStartE2EDuration="2.760593252s" podCreationTimestamp="2026-02-20 00:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:29:09.758230805 +0000 UTC m=+1236.126888421" watchObservedRunningTime="2026-02-20 00:29:09.760593252 +0000 UTC m=+1236.129250858" Feb 20 00:29:17 crc kubenswrapper[5107]: I0220 00:29:17.797608 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:17 crc kubenswrapper[5107]: I0220 00:29:17.798391 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="docker-build" containerID="cri-o://d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6" gracePeriod=30 Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.301437 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_eb21134f-492a-4861-96c5-1ea2d5074c62/docker-build/0.log" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.303765 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.413355 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.413430 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rzh\" (UniqueName: \"kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.413463 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.413497 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414286 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414386 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414443 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414497 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414526 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414558 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414593 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: 
\"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414660 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root\") pod \"eb21134f-492a-4861-96c5-1ea2d5074c62\" (UID: \"eb21134f-492a-4861-96c5-1ea2d5074c62\") " Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.414958 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.415334 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.415489 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416117 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416345 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416426 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416455 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416451 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416484 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416559 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.416605 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.422041 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). 
InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.430217 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.430231 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh" (OuterVolumeSpecName: "kube-api-access-m4rzh") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "kube-api-access-m4rzh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.480714 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518816 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518889 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518918 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb21134f-492a-4861-96c5-1ea2d5074c62-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518939 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/eb21134f-492a-4861-96c5-1ea2d5074c62-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518959 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m4rzh\" (UniqueName: \"kubernetes.io/projected/eb21134f-492a-4861-96c5-1ea2d5074c62-kube-api-access-m4rzh\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518976 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.518992 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb21134f-492a-4861-96c5-1ea2d5074c62-node-pullsecrets\") on node 
\"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.805278 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_eb21134f-492a-4861-96c5-1ea2d5074c62/docker-build/0.log" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.805891 5107 generic.go:358] "Generic (PLEG): container finished" podID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerID="d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6" exitCode=1 Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.805932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"eb21134f-492a-4861-96c5-1ea2d5074c62","Type":"ContainerDied","Data":"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6"} Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.805962 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"eb21134f-492a-4861-96c5-1ea2d5074c62","Type":"ContainerDied","Data":"ddd241796727fc672d6120b25136630663192d6580081a17257d782ceb6272ab"} Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.805980 5107 scope.go:117] "RemoveContainer" containerID="d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.806040 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.860369 5107 scope.go:117] "RemoveContainer" containerID="ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.892711 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "eb21134f-492a-4861-96c5-1ea2d5074c62" (UID: "eb21134f-492a-4861-96c5-1ea2d5074c62"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.925595 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/eb21134f-492a-4861-96c5-1ea2d5074c62-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.956301 5107 scope.go:117] "RemoveContainer" containerID="d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6" Feb 20 00:29:18 crc kubenswrapper[5107]: E0220 00:29:18.957293 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6\": container with ID starting with d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6 not found: ID does not exist" containerID="d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.957495 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6"} err="failed to get container status \"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6\": rpc 
error: code = NotFound desc = could not find container \"d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6\": container with ID starting with d24a8252af31375d955790cfaebd34bfe413df7216b2566248049dceacd03aa6 not found: ID does not exist" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.957659 5107 scope.go:117] "RemoveContainer" containerID="ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d" Feb 20 00:29:18 crc kubenswrapper[5107]: E0220 00:29:18.958528 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d\": container with ID starting with ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d not found: ID does not exist" containerID="ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d" Feb 20 00:29:18 crc kubenswrapper[5107]: I0220 00:29:18.958605 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d"} err="failed to get container status \"ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d\": rpc error: code = NotFound desc = could not find container \"ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d\": container with ID starting with ead6cbdd83dab0b07cfe8b5b02acd2131585d7ed38824de80e28d25838424e2d not found: ID does not exist" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.162179 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.170167 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.391296 5107 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.392060 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="manage-dockerfile" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.392078 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="manage-dockerfile" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.392108 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="docker-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.392116 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="docker-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.392303 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" containerName="docker-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.449732 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.449886 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.451745 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.453273 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.453544 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.453708 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.535525 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.535824 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536088 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536318 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxc5\" (UniqueName: \"kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536368 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536417 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536447 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc 
kubenswrapper[5107]: I0220 00:29:19.536465 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536504 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536524 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536617 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.536651 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir\") 
pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.638786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.638908 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.638953 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639626 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639685 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639713 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639741 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639757 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639778 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639826 5107 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7vxc5\" (UniqueName: \"kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639847 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639890 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.639905 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: 
I0220 00:29:19.640190 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.640348 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.640374 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.640718 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.640711 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc 
kubenswrapper[5107]: I0220 00:29:19.640834 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.641787 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.648170 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.655710 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.666842 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxc5\" (UniqueName: \"kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:19 crc kubenswrapper[5107]: I0220 00:29:19.767196 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:29:20 crc kubenswrapper[5107]: I0220 00:29:20.082416 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 20 00:29:20 crc kubenswrapper[5107]: I0220 00:29:20.496456 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb21134f-492a-4861-96c5-1ea2d5074c62" path="/var/lib/kubelet/pods/eb21134f-492a-4861-96c5-1ea2d5074c62/volumes" Feb 20 00:29:20 crc kubenswrapper[5107]: I0220 00:29:20.838949 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerStarted","Data":"6796aa65f0d7a6b80d72b8fc7ef4fc67665c3d745ed3e2876427f785ff17dc07"} Feb 20 00:29:20 crc kubenswrapper[5107]: I0220 00:29:20.839020 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerStarted","Data":"b7c875535887bc73d0110edf8445f80706a31f9eb3965488622b53726dc936bb"} Feb 20 00:29:21 crc kubenswrapper[5107]: I0220 00:29:21.852738 5107 generic.go:358] "Generic (PLEG): container finished" podID="d194899b-248c-4ae4-a9e6-202f7a737447" containerID="6796aa65f0d7a6b80d72b8fc7ef4fc67665c3d745ed3e2876427f785ff17dc07" exitCode=0 Feb 20 00:29:21 crc kubenswrapper[5107]: I0220 00:29:21.852859 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerDied","Data":"6796aa65f0d7a6b80d72b8fc7ef4fc67665c3d745ed3e2876427f785ff17dc07"} Feb 20 00:29:22 crc kubenswrapper[5107]: I0220 00:29:22.863693 5107 generic.go:358] "Generic 
(PLEG): container finished" podID="d194899b-248c-4ae4-a9e6-202f7a737447" containerID="87443d701a5040d850753dc766eb6355e78a3e36a92e65173bb73f7127f60477" exitCode=0 Feb 20 00:29:22 crc kubenswrapper[5107]: I0220 00:29:22.863796 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerDied","Data":"87443d701a5040d850753dc766eb6355e78a3e36a92e65173bb73f7127f60477"} Feb 20 00:29:22 crc kubenswrapper[5107]: I0220 00:29:22.908848 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_d194899b-248c-4ae4-a9e6-202f7a737447/manage-dockerfile/0.log" Feb 20 00:29:23 crc kubenswrapper[5107]: I0220 00:29:23.875163 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerStarted","Data":"8c9d72c75ddaa14f8e0adb8a58fd5d9e51de8f775fefedff1370a294fac4f009"} Feb 20 00:29:23 crc kubenswrapper[5107]: I0220 00:29:23.898371 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=4.898354585 podStartE2EDuration="4.898354585s" podCreationTimestamp="2026-02-20 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:29:23.896990987 +0000 UTC m=+1250.265648573" watchObservedRunningTime="2026-02-20 00:29:23.898354585 +0000 UTC m=+1250.267012141" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.152777 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb"] Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.334312 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525790-7j74f"] Feb 
20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.334638 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.337852 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.338239 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.340060 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb"] Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.340088 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-7j74f"] Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.340194 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.343592 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.344903 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.351610 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.450404 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6l6\" (UniqueName: \"kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6\") pod \"auto-csr-approver-29525790-7j74f\" (UID: \"840d4a82-b92c-43b9-9731-84a5e3d55b65\") " pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.450877 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.451086 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc 
kubenswrapper[5107]: I0220 00:30:00.451255 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pj92\" (UniqueName: \"kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.552540 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.552638 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.552909 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6pj92\" (UniqueName: \"kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.552970 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6l6\" (UniqueName: \"kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6\") pod 
\"auto-csr-approver-29525790-7j74f\" (UID: \"840d4a82-b92c-43b9-9731-84a5e3d55b65\") " pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.555022 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.575459 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.582416 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6l6\" (UniqueName: \"kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6\") pod \"auto-csr-approver-29525790-7j74f\" (UID: \"840d4a82-b92c-43b9-9731-84a5e3d55b65\") " pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.585507 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pj92\" (UniqueName: \"kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92\") pod \"collect-profiles-29525790-8tnqb\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.677860 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:00 crc kubenswrapper[5107]: I0220 00:30:00.689553 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:01 crc kubenswrapper[5107]: I0220 00:30:01.116476 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb"] Feb 20 00:30:01 crc kubenswrapper[5107]: I0220 00:30:01.127705 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:30:01 crc kubenswrapper[5107]: I0220 00:30:01.148964 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-7j74f"] Feb 20 00:30:01 crc kubenswrapper[5107]: W0220 00:30:01.159804 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod840d4a82_b92c_43b9_9731_84a5e3d55b65.slice/crio-2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e WatchSource:0}: Error finding container 2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e: Status 404 returned error can't find the container with id 2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e Feb 20 00:30:01 crc kubenswrapper[5107]: I0220 00:30:01.197553 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" event={"ID":"bff88678-c890-4464-a9ec-23cc693be0db","Type":"ContainerStarted","Data":"2553eab2ef8d0592cb764b528eba80850c6a394b79f44f13b144d11e8c0e6505"} Feb 20 00:30:01 crc kubenswrapper[5107]: I0220 00:30:01.198703 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-7j74f" 
event={"ID":"840d4a82-b92c-43b9-9731-84a5e3d55b65","Type":"ContainerStarted","Data":"2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e"} Feb 20 00:30:02 crc kubenswrapper[5107]: I0220 00:30:02.210303 5107 generic.go:358] "Generic (PLEG): container finished" podID="bff88678-c890-4464-a9ec-23cc693be0db" containerID="b94f2b09dd604ed9114652f4dcb56c5329ff40b53cda9fc9bbcc48887c108545" exitCode=0 Feb 20 00:30:02 crc kubenswrapper[5107]: I0220 00:30:02.210372 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" event={"ID":"bff88678-c890-4464-a9ec-23cc693be0db","Type":"ContainerDied","Data":"b94f2b09dd604ed9114652f4dcb56c5329ff40b53cda9fc9bbcc48887c108545"} Feb 20 00:30:02 crc kubenswrapper[5107]: I0220 00:30:02.824069 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:30:02 crc kubenswrapper[5107]: I0220 00:30:02.824237 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.419224 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.606078 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume\") pod \"bff88678-c890-4464-a9ec-23cc693be0db\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.606160 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume\") pod \"bff88678-c890-4464-a9ec-23cc693be0db\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.606261 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pj92\" (UniqueName: \"kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92\") pod \"bff88678-c890-4464-a9ec-23cc693be0db\" (UID: \"bff88678-c890-4464-a9ec-23cc693be0db\") " Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.606757 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume" (OuterVolumeSpecName: "config-volume") pod "bff88678-c890-4464-a9ec-23cc693be0db" (UID: "bff88678-c890-4464-a9ec-23cc693be0db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.612314 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92" (OuterVolumeSpecName: "kube-api-access-6pj92") pod "bff88678-c890-4464-a9ec-23cc693be0db" (UID: "bff88678-c890-4464-a9ec-23cc693be0db"). 
InnerVolumeSpecName "kube-api-access-6pj92". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.620347 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bff88678-c890-4464-a9ec-23cc693be0db" (UID: "bff88678-c890-4464-a9ec-23cc693be0db"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.707438 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6pj92\" (UniqueName: \"kubernetes.io/projected/bff88678-c890-4464-a9ec-23cc693be0db-kube-api-access-6pj92\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.707493 5107 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff88678-c890-4464-a9ec-23cc693be0db-config-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:03 crc kubenswrapper[5107]: I0220 00:30:03.707510 5107 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff88678-c890-4464-a9ec-23cc693be0db-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:04 crc kubenswrapper[5107]: I0220 00:30:04.228310 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" Feb 20 00:30:04 crc kubenswrapper[5107]: I0220 00:30:04.228323 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525790-8tnqb" event={"ID":"bff88678-c890-4464-a9ec-23cc693be0db","Type":"ContainerDied","Data":"2553eab2ef8d0592cb764b528eba80850c6a394b79f44f13b144d11e8c0e6505"} Feb 20 00:30:04 crc kubenswrapper[5107]: I0220 00:30:04.228750 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2553eab2ef8d0592cb764b528eba80850c6a394b79f44f13b144d11e8c0e6505" Feb 20 00:30:04 crc kubenswrapper[5107]: I0220 00:30:04.230597 5107 generic.go:358] "Generic (PLEG): container finished" podID="840d4a82-b92c-43b9-9731-84a5e3d55b65" containerID="553e606cc12b71623bd51c358c9f6f13b9d144041240ac5ee1eb28e832e867c0" exitCode=0 Feb 20 00:30:04 crc kubenswrapper[5107]: I0220 00:30:04.230725 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-7j74f" event={"ID":"840d4a82-b92c-43b9-9731-84a5e3d55b65","Type":"ContainerDied","Data":"553e606cc12b71623bd51c358c9f6f13b9d144041240ac5ee1eb28e832e867c0"} Feb 20 00:30:05 crc kubenswrapper[5107]: I0220 00:30:05.491014 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:05 crc kubenswrapper[5107]: I0220 00:30:05.528354 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6l6\" (UniqueName: \"kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6\") pod \"840d4a82-b92c-43b9-9731-84a5e3d55b65\" (UID: \"840d4a82-b92c-43b9-9731-84a5e3d55b65\") " Feb 20 00:30:05 crc kubenswrapper[5107]: I0220 00:30:05.544386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6" (OuterVolumeSpecName: "kube-api-access-rf6l6") pod "840d4a82-b92c-43b9-9731-84a5e3d55b65" (UID: "840d4a82-b92c-43b9-9731-84a5e3d55b65"). InnerVolumeSpecName "kube-api-access-rf6l6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:05 crc kubenswrapper[5107]: I0220 00:30:05.630381 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rf6l6\" (UniqueName: \"kubernetes.io/projected/840d4a82-b92c-43b9-9731-84a5e3d55b65-kube-api-access-rf6l6\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:06 crc kubenswrapper[5107]: I0220 00:30:06.246616 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525790-7j74f" Feb 20 00:30:06 crc kubenswrapper[5107]: I0220 00:30:06.246655 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525790-7j74f" event={"ID":"840d4a82-b92c-43b9-9731-84a5e3d55b65","Type":"ContainerDied","Data":"2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e"} Feb 20 00:30:06 crc kubenswrapper[5107]: I0220 00:30:06.247293 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da4cd350336b391a337ae362753928b7feb66f30049adb037783b552027c23e" Feb 20 00:30:06 crc kubenswrapper[5107]: I0220 00:30:06.557435 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-prptk"] Feb 20 00:30:06 crc kubenswrapper[5107]: I0220 00:30:06.561986 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525784-prptk"] Feb 20 00:30:08 crc kubenswrapper[5107]: I0220 00:30:08.494782 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ab841c-7137-4098-ab2a-00c93ae365cd" path="/var/lib/kubelet/pods/94ab841c-7137-4098-ab2a-00c93ae365cd/volumes" Feb 20 00:30:19 crc kubenswrapper[5107]: I0220 00:30:19.372970 5107 generic.go:358] "Generic (PLEG): container finished" podID="d194899b-248c-4ae4-a9e6-202f7a737447" containerID="8c9d72c75ddaa14f8e0adb8a58fd5d9e51de8f775fefedff1370a294fac4f009" exitCode=0 Feb 20 00:30:19 crc kubenswrapper[5107]: I0220 00:30:19.373095 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerDied","Data":"8c9d72c75ddaa14f8e0adb8a58fd5d9e51de8f775fefedff1370a294fac4f009"} Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.723032 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871233 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871286 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871317 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871371 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871438 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871511 5107 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871534 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vxc5\" (UniqueName: \"kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871569 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871622 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871635 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871673 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871821 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871921 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.871984 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir\") pod \"d194899b-248c-4ae4-a9e6-202f7a737447\" (UID: \"d194899b-248c-4ae4-a9e6-202f7a737447\") " Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.872727 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.872775 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.872803 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d194899b-248c-4ae4-a9e6-202f7a737447-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.872738 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.873128 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.874346 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.875800 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.879217 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.883201 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.883351 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5" (OuterVolumeSpecName: "kube-api-access-7vxc5") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "kube-api-access-7vxc5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974447 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vxc5\" (UniqueName: \"kubernetes.io/projected/d194899b-248c-4ae4-a9e6-202f7a737447-kube-api-access-7vxc5\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974488 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974501 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974513 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974525 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d194899b-248c-4ae4-a9e6-202f7a737447-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974537 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974547 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-run\") on node \"crc\" 
DevicePath \"\"" Feb 20 00:30:20 crc kubenswrapper[5107]: I0220 00:30:20.974582 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/d194899b-248c-4ae4-a9e6-202f7a737447-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:21 crc kubenswrapper[5107]: I0220 00:30:21.014074 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:21 crc kubenswrapper[5107]: I0220 00:30:21.076210 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:21 crc kubenswrapper[5107]: I0220 00:30:21.395299 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"d194899b-248c-4ae4-a9e6-202f7a737447","Type":"ContainerDied","Data":"b7c875535887bc73d0110edf8445f80706a31f9eb3965488622b53726dc936bb"} Feb 20 00:30:21 crc kubenswrapper[5107]: I0220 00:30:21.395683 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c875535887bc73d0110edf8445f80706a31f9eb3965488622b53726dc936bb" Feb 20 00:30:21 crc kubenswrapper[5107]: I0220 00:30:21.395690 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 20 00:30:22 crc kubenswrapper[5107]: I0220 00:30:22.336017 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d194899b-248c-4ae4-a9e6-202f7a737447" (UID: "d194899b-248c-4ae4-a9e6-202f7a737447"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:22 crc kubenswrapper[5107]: I0220 00:30:22.401390 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d194899b-248c-4ae4-a9e6-202f7a737447-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.408981 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410707 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="840d4a82-b92c-43b9-9731-84a5e3d55b65" containerName="oc" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410730 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="840d4a82-b92c-43b9-9731-84a5e3d55b65" containerName="oc" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410750 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bff88678-c890-4464-a9ec-23cc693be0db" containerName="collect-profiles" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410759 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff88678-c890-4464-a9ec-23cc693be0db" containerName="collect-profiles" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410776 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" 
containerName="manage-dockerfile" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410786 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="manage-dockerfile" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410807 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="git-clone" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410816 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="git-clone" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410859 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="docker-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.410869 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="docker-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.411017 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="d194899b-248c-4ae4-a9e6-202f7a737447" containerName="docker-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.411043 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="840d4a82-b92c-43b9-9731-84a5e3d55b65" containerName="oc" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.411055 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="bff88678-c890-4464-a9ec-23cc693be0db" containerName="collect-profiles" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.568081 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.568476 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.570851 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-ca\"" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.570895 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-global-ca\"" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.571021 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.571777 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-sys-config\"" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636226 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636325 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636356 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636457 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636490 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636510 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636536 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-1-build\" 
(UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636565 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjq4\" (UniqueName: \"kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636598 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636709 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636794 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.636877 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.738436 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.738539 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.738617 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.738675 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.738905 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739012 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739130 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739204 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739252 5107 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739328 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739422 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnjq4\" (UniqueName: \"kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739536 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739871 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739902 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.739949 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.740047 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.740635 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.740726 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.740899 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.741226 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.741912 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.749935 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 
00:30:30.753102 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.773039 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjq4\" (UniqueName: \"kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Feb 20 00:30:30 crc kubenswrapper[5107]: I0220 00:30:30.890117 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Feb 20 00:30:31 crc kubenswrapper[5107]: I0220 00:30:31.180035 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Feb 20 00:30:31 crc kubenswrapper[5107]: I0220 00:30:31.475171 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"a7129f93-0131-4fdb-af46-aa1f9795292d","Type":"ContainerStarted","Data":"87153bb6c6360b569dce5e5470fb3eeb03e20bdc15346ed2f0900a7f62635def"}
Feb 20 00:30:32 crc kubenswrapper[5107]: I0220 00:30:32.488561 5107 generic.go:358] "Generic (PLEG): container finished" podID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerID="76dc8190e7ddff617efee41ab0e8c056ab7bfda4b3e402d816280e83ba4c06bc" exitCode=0
Feb 20 00:30:32 crc kubenswrapper[5107]: I0220 00:30:32.502928 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"a7129f93-0131-4fdb-af46-aa1f9795292d","Type":"ContainerDied","Data":"76dc8190e7ddff617efee41ab0e8c056ab7bfda4b3e402d816280e83ba4c06bc"}
Feb 20 00:30:32 crc kubenswrapper[5107]: I0220 00:30:32.824041 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 20 00:30:32 crc kubenswrapper[5107]: I0220 00:30:32.824123 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 20 00:30:33 crc kubenswrapper[5107]: I0220 00:30:33.499066 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_a7129f93-0131-4fdb-af46-aa1f9795292d/docker-build/0.log"
Feb 20 00:30:33 crc kubenswrapper[5107]: I0220 00:30:33.500136 5107 generic.go:358] "Generic (PLEG): container finished" podID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerID="25c7ee7cab5f21b8f45e3cde3fed2ff94669b441bad9da77494179ac94fde768" exitCode=1
Feb 20 00:30:33 crc kubenswrapper[5107]: I0220 00:30:33.500307 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"a7129f93-0131-4fdb-af46-aa1f9795292d","Type":"ContainerDied","Data":"25c7ee7cab5f21b8f45e3cde3fed2ff94669b441bad9da77494179ac94fde768"}
Feb 20 00:30:34 crc kubenswrapper[5107]: I0220 00:30:34.873710 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_a7129f93-0131-4fdb-af46-aa1f9795292d/docker-build/0.log"
Feb 20 00:30:34 crc kubenswrapper[5107]: I0220 00:30:34.874779 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.032868 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.032956 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.032989 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033228 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033268 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033301 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033353 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033375 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033470 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033506 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnjq4\" (UniqueName: \"kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033564 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.033616 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache\") pod \"a7129f93-0131-4fdb-af46-aa1f9795292d\" (UID: \"a7129f93-0131-4fdb-af46-aa1f9795292d\") "
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.034298 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.034612 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.035524 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.035596 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.035973 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.036049 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.036503 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.036630 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.036845 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.055335 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4" (OuterVolumeSpecName: "kube-api-access-lnjq4") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "kube-api-access-lnjq4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.056300 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.079528 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "a7129f93-0131-4fdb-af46-aa1f9795292d" (UID: "a7129f93-0131-4fdb-af46-aa1f9795292d"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134765 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134804 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134817 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134829 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134840 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/a7129f93-0131-4fdb-af46-aa1f9795292d-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134854 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134865 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a7129f93-0131-4fdb-af46-aa1f9795292d-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134875 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a7129f93-0131-4fdb-af46-aa1f9795292d-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134886 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134897 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnjq4\" (UniqueName: \"kubernetes.io/projected/a7129f93-0131-4fdb-af46-aa1f9795292d-kube-api-access-lnjq4\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134907 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.134918 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a7129f93-0131-4fdb-af46-aa1f9795292d-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.528525 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_a7129f93-0131-4fdb-af46-aa1f9795292d/docker-build/0.log"
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.529287 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.529283 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"a7129f93-0131-4fdb-af46-aa1f9795292d","Type":"ContainerDied","Data":"87153bb6c6360b569dce5e5470fb3eeb03e20bdc15346ed2f0900a7f62635def"}
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.529438 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87153bb6c6360b569dce5e5470fb3eeb03e20bdc15346ed2f0900a7f62635def"
Feb 20 00:30:35 crc kubenswrapper[5107]: I0220 00:30:35.581503 5107 scope.go:117] "RemoveContainer" containerID="e308a80624a3e0b0f1358657c05a9c082ed9eed8e8c19635a105ea6c96f9be73"
Feb 20 00:30:40 crc kubenswrapper[5107]: I0220 00:30:40.851423 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Feb 20 00:30:40 crc kubenswrapper[5107]: I0220 00:30:40.860354 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402187 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402757 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerName="docker-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402768 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerName="docker-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402784 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerName="manage-dockerfile"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402790 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerName="manage-dockerfile"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.402897 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" containerName="docker-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.411451 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.413805 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-global-ca\""
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.413867 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\""
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.417011 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-sys-config\""
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.418050 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.418696 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-ca\""
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.446889 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.446945 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447091 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447160 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447216 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447235 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rgv\" (UniqueName: \"kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447285 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447311 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447366 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447428 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.447453 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.494046 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7129f93-0131-4fdb-af46-aa1f9795292d" path="/var/lib/kubelet/pods/a7129f93-0131-4fdb-af46-aa1f9795292d/volumes"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.548817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.548874 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.548919 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.548947 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.548981 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549011 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549052 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549073 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549098 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rgv\" (UniqueName: \"kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549132 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549177 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549226 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549267 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549330 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549687 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549740 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.549893 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.550080 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.550254 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.550320 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.550534 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.557995 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.564191 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.577691 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rgv\" (UniqueName: \"kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.726090 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Feb 20 00:30:42 crc kubenswrapper[5107]: I0220 00:30:42.923255 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"]
Feb 20 00:30:43 crc kubenswrapper[5107]: I0220 00:30:43.600119 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerStarted","Data":"2b3e8fd91a3d41b6ef66b47d55b8869745ecf2f979f3c6c04fa31bc6ce8e63d2"}
Feb 20 00:30:43 crc kubenswrapper[5107]: I0220 00:30:43.600542 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerStarted","Data":"6f2cdd1b5d00232bbce0ca524da743054a0e8a8b208ae23e9d8129ba8187044d"}
Feb 20 00:30:44 crc kubenswrapper[5107]: I0220 00:30:44.608704 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerID="2b3e8fd91a3d41b6ef66b47d55b8869745ecf2f979f3c6c04fa31bc6ce8e63d2" exitCode=0
Feb 20 00:30:44 crc kubenswrapper[5107]: I0220 00:30:44.608815 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerDied","Data":"2b3e8fd91a3d41b6ef66b47d55b8869745ecf2f979f3c6c04fa31bc6ce8e63d2"}
Feb 20 00:30:45 crc kubenswrapper[5107]: I0220 00:30:45.619515 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerID="d9c56df15c543eeea864e157930fba2afcab73ce96118d1c94f9f9313582e835" exitCode=0
Feb 20 00:30:45 crc kubenswrapper[5107]: I0220 00:30:45.619598 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerDied","Data":"d9c56df15c543eeea864e157930fba2afcab73ce96118d1c94f9f9313582e835"}
Feb 20 00:30:45 crc kubenswrapper[5107]: I0220 00:30:45.660139 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90/manage-dockerfile/0.log"
Feb 20 00:30:46 crc kubenswrapper[5107]: I0220 00:30:46.635194 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerStarted","Data":"1c4948bce09c47848f2abbc9f90a39a2aa77586c177f290e48ba02eb6f600587"}
Feb 20 00:30:46 crc kubenswrapper[5107]: I0220 00:30:46.663417 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=4.663388284 podStartE2EDuration="4.663388284s" podCreationTimestamp="2026-02-20 00:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:30:46.661426649 +0000 UTC m=+1333.030084225" watchObservedRunningTime="2026-02-20 00:30:46.663388284 +0000 UTC m=+1333.032045890"
Feb 20 00:30:50 crc kubenswrapper[5107]: I0220 00:30:50.671915 5107 generic.go:358] "Generic (PLEG): container finished" podID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerID="1c4948bce09c47848f2abbc9f90a39a2aa77586c177f290e48ba02eb6f600587" exitCode=0
Feb 20 00:30:50 crc kubenswrapper[5107]: I0220 00:30:50.672032 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerDied","Data":"1c4948bce09c47848f2abbc9f90a39a2aa77586c177f290e48ba02eb6f600587"} Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.000433 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109004 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109055 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109093 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109156 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109181 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rgv\" (UniqueName: \"kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109183 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109222 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109273 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109285 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109308 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109346 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109376 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109405 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull\") pod \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\" (UID: \"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90\") " Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109483 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109857 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.109897 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.110095 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.110205 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.110927 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.111474 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.113138 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.114918 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.115329 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.115541 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv" (OuterVolumeSpecName: "kube-api-access-v9rgv") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "kube-api-access-v9rgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.115557 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.116588 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" (UID: "dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211581 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211626 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211637 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211645 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211656 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211664 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211671 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211679 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9rgv\" (UniqueName: \"kubernetes.io/projected/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-kube-api-access-v9rgv\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211686 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.211694 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.693042 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90","Type":"ContainerDied","Data":"6f2cdd1b5d00232bbce0ca524da743054a0e8a8b208ae23e9d8129ba8187044d"} Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.693653 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2cdd1b5d00232bbce0ca524da743054a0e8a8b208ae23e9d8129ba8187044d" Feb 20 00:30:52 crc kubenswrapper[5107]: I0220 00:30:52.693090 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.690727 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692199 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="git-clone" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692218 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="git-clone" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692252 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="docker-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692260 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="docker-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692283 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="manage-dockerfile" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692291 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="manage-dockerfile" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.692436 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc8dc5ad-0a3b-4c68-a07f-8d7d7dc1fd90" containerName="docker-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.699986 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.703737 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-global-ca\"" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.704450 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-sys-config\"" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.707030 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.713625 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-ca\"" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714409 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714510 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714608 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" 
(UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714656 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714697 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714726 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tt7\" (UniqueName: \"kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714772 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714926 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.714971 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.715103 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.715206 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.715251 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.720658 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816507 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816574 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816624 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816648 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816679 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816702 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tt7\" (UniqueName: \"kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816710 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816739 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816823 
5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.816879 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.817007 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.817082 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.817130 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.818083 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.818672 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819022 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819343 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819614 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819741 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819762 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.819697 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.824338 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.831802 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:56 crc kubenswrapper[5107]: I0220 00:30:56.845498 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tt7\" (UniqueName: \"kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:57 crc kubenswrapper[5107]: I0220 00:30:57.044624 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:30:57 crc kubenswrapper[5107]: I0220 00:30:57.354502 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 20 00:30:57 crc kubenswrapper[5107]: I0220 00:30:57.749605 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b","Type":"ContainerStarted","Data":"3ed94bdb4e4afba6f4d317bc0134f651039b0ffb3066a90d0328f7b120e8d8df"} Feb 20 00:30:58 crc kubenswrapper[5107]: I0220 00:30:58.761891 5107 generic.go:358] "Generic (PLEG): container finished" podID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerID="e73eec1422aae264f736e239da0725927404f962c13e11b6b073eb36d8d21aca" exitCode=0 Feb 20 00:30:58 crc kubenswrapper[5107]: I0220 00:30:58.762082 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" 
event={"ID":"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b","Type":"ContainerDied","Data":"e73eec1422aae264f736e239da0725927404f962c13e11b6b073eb36d8d21aca"} Feb 20 00:30:59 crc kubenswrapper[5107]: I0220 00:30:59.777840 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_0befdcba-1eeb-4fe2-9cd4-5ea1428d584b/docker-build/0.log" Feb 20 00:30:59 crc kubenswrapper[5107]: I0220 00:30:59.778921 5107 generic.go:358] "Generic (PLEG): container finished" podID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerID="61f38785f3aeb506c7202f95f268d5313ee73eae05fb53f9c6f8bd41162da3c5" exitCode=1 Feb 20 00:30:59 crc kubenswrapper[5107]: I0220 00:30:59.779050 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b","Type":"ContainerDied","Data":"61f38785f3aeb506c7202f95f268d5313ee73eae05fb53f9c6f8bd41162da3c5"} Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.178172 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_0befdcba-1eeb-4fe2-9cd4-5ea1428d584b/docker-build/0.log" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.179626 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.346817 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.346995 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347121 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347234 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347112 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347307 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347409 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347467 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347567 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9tt7\" (UniqueName: \"kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347650 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347722 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347767 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.347812 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run\") pod \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\" (UID: \"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b\") " Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348022 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348165 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348223 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348683 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348723 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348741 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348759 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.348881 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.349291 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.349385 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.349935 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.351330 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.355413 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.356717 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7" (OuterVolumeSpecName: "kube-api-access-s9tt7") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "kube-api-access-s9tt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.358377 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" (UID: "0befdcba-1eeb-4fe2-9cd4-5ea1428d584b"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450666 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450716 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450736 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s9tt7\" (UniqueName: \"kubernetes.io/projected/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-kube-api-access-s9tt7\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450753 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450770 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450787 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450804 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-push\") on 
node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.450824 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.801732 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_0befdcba-1eeb-4fe2-9cd4-5ea1428d584b/docker-build/0.log" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.802939 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.802984 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"0befdcba-1eeb-4fe2-9cd4-5ea1428d584b","Type":"ContainerDied","Data":"3ed94bdb4e4afba6f4d317bc0134f651039b0ffb3066a90d0328f7b120e8d8df"} Feb 20 00:31:01 crc kubenswrapper[5107]: I0220 00:31:01.803044 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed94bdb4e4afba6f4d317bc0134f651039b0ffb3066a90d0328f7b120e8d8df" Feb 20 00:31:02 crc kubenswrapper[5107]: I0220 00:31:02.824542 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:31:02 crc kubenswrapper[5107]: I0220 00:31:02.824695 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:31:02 crc kubenswrapper[5107]: I0220 00:31:02.824773 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:31:02 crc kubenswrapper[5107]: I0220 00:31:02.825802 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:31:02 crc kubenswrapper[5107]: I0220 00:31:02.825912 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f" gracePeriod=600 Feb 20 00:31:03 crc kubenswrapper[5107]: I0220 00:31:03.826849 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f" exitCode=0 Feb 20 00:31:03 crc kubenswrapper[5107]: I0220 00:31:03.826923 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f"} Feb 20 00:31:03 crc kubenswrapper[5107]: I0220 00:31:03.828411 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" 
event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc"} Feb 20 00:31:03 crc kubenswrapper[5107]: I0220 00:31:03.828448 5107 scope.go:117] "RemoveContainer" containerID="0f2d99740a54c1fb08d085d8cf733c3e7a30e596f0b9915e84a2b3b54f15c179" Feb 20 00:31:07 crc kubenswrapper[5107]: I0220 00:31:07.568478 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 20 00:31:07 crc kubenswrapper[5107]: I0220 00:31:07.579270 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Feb 20 00:31:08 crc kubenswrapper[5107]: I0220 00:31:08.495554 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" path="/var/lib/kubelet/pods/0befdcba-1eeb-4fe2-9cd4-5ea1428d584b/volumes" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.399040 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.400181 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerName="docker-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.400207 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerName="docker-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.400251 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerName="manage-dockerfile" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.400264 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerName="manage-dockerfile" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 
00:31:09.400441 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="0befdcba-1eeb-4fe2-9cd4-5ea1428d584b" containerName="docker-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.408193 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.410413 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\"" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.411220 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-sys-config\"" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.414092 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-ca\"" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.414368 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-global-ca\"" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.431503 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470195 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470545 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470614 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470654 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7gt\" (UniqueName: \"kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470883 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.470937 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471033 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471062 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471109 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471192 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471244 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.471318 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573285 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573354 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573415 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573449 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573509 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573541 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573578 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573642 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573679 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573727 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573772 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.573869 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7gt\" (UniqueName: \"kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.575011 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.575526 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.576385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.576731 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.576778 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.576878 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.577268 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.577891 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.577888 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.583746 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.584010 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.606046 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7gt\" (UniqueName: \"kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:09 crc kubenswrapper[5107]: I0220 00:31:09.732857 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:10 crc kubenswrapper[5107]: I0220 00:31:10.015180 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"]
Feb 20 00:31:10 crc kubenswrapper[5107]: W0220 00:31:10.021923 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c4f7853_ef9b_4245_97f4_32c388ab8e98.slice/crio-75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f WatchSource:0}: Error finding container 75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f: Status 404 returned error can't find the container with id 75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f
Feb 20 00:31:10 crc kubenswrapper[5107]: I0220 00:31:10.932256 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerStarted","Data":"7ecede36151d064e29350f26305aa0cdc625547b2051c44f78eca038df5dc95e"}
Feb 20 00:31:10 crc kubenswrapper[5107]: I0220 00:31:10.932694 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerStarted","Data":"75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f"}
Feb 20 00:31:11 crc kubenswrapper[5107]: I0220 00:31:11.940910 5107 generic.go:358] "Generic (PLEG): container finished" podID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerID="7ecede36151d064e29350f26305aa0cdc625547b2051c44f78eca038df5dc95e" exitCode=0
Feb 20 00:31:11 crc kubenswrapper[5107]: I0220 00:31:11.940977 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerDied","Data":"7ecede36151d064e29350f26305aa0cdc625547b2051c44f78eca038df5dc95e"}
Feb 20 00:31:12 crc kubenswrapper[5107]: I0220 00:31:12.952838 5107 generic.go:358] "Generic (PLEG): container finished" podID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerID="00041990ca6bafa79c9b03d65a226b6c8cc631cf56eaa862a10b69c50b67b470" exitCode=0
Feb 20 00:31:12 crc kubenswrapper[5107]: I0220 00:31:12.952942 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerDied","Data":"00041990ca6bafa79c9b03d65a226b6c8cc631cf56eaa862a10b69c50b67b470"}
Feb 20 00:31:13 crc kubenswrapper[5107]: I0220 00:31:13.029832 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_9c4f7853-ef9b-4245-97f4-32c388ab8e98/manage-dockerfile/0.log"
Feb 20 00:31:13 crc kubenswrapper[5107]: I0220 00:31:13.965901 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerStarted","Data":"1e84aec5376c9859f4bf80d9b8159ab4c2f98c0bdf2a1eb7a36bef6228284532"}
Feb 20 00:31:14 crc kubenswrapper[5107]: I0220 00:31:14.005686 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.005668038 podStartE2EDuration="5.005668038s" podCreationTimestamp="2026-02-20 00:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:31:14.002819698 +0000 UTC m=+1360.371477264" watchObservedRunningTime="2026-02-20 00:31:14.005668038 +0000 UTC m=+1360.374325604"
Feb 20 00:31:18 crc kubenswrapper[5107]: I0220 00:31:18.004633 5107 generic.go:358] "Generic (PLEG): container finished" podID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerID="1e84aec5376c9859f4bf80d9b8159ab4c2f98c0bdf2a1eb7a36bef6228284532" exitCode=0
Feb 20 00:31:18 crc kubenswrapper[5107]: I0220 00:31:18.004740 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerDied","Data":"1e84aec5376c9859f4bf80d9b8159ab4c2f98c0bdf2a1eb7a36bef6228284532"}
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.367076 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541016 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541267 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn7gt\" (UniqueName: \"kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541301 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541336 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541445 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541502 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541586 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541625 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541668 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541711 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.541747 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir\") pod \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\" (UID: \"9c4f7853-ef9b-4245-97f4-32c388ab8e98\") "
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.542266 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.542641 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.542416 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.543465 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.543569 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.543687 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.544009 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.546267 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.549584 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.550019 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.550905 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt" (OuterVolumeSpecName: "kube-api-access-nn7gt") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "kube-api-access-nn7gt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.554751 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9c4f7853-ef9b-4245-97f4-32c388ab8e98" (UID: "9c4f7853-ef9b-4245-97f4-32c388ab8e98"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644731 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644806 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644838 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nn7gt\" (UniqueName: \"kubernetes.io/projected/9c4f7853-ef9b-4245-97f4-32c388ab8e98-kube-api-access-nn7gt\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644861 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644889 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644913 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644937 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c4f7853-ef9b-4245-97f4-32c388ab8e98-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644964 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c4f7853-ef9b-4245-97f4-32c388ab8e98-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.644989 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.645013 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/9c4f7853-ef9b-4245-97f4-32c388ab8e98-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.645041 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:19 crc kubenswrapper[5107]: I0220 00:31:19.645113 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c4f7853-ef9b-4245-97f4-32c388ab8e98-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 20 00:31:20 crc kubenswrapper[5107]: I0220 00:31:20.034392 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"9c4f7853-ef9b-4245-97f4-32c388ab8e98","Type":"ContainerDied","Data":"75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f"}
Feb 20 00:31:20 crc kubenswrapper[5107]: I0220 00:31:20.034902 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d400aac831aeb7ceecefa6fc10fbe8b90dacca3a52433d6fe6df696f65157f"
Feb 20 00:31:20 crc kubenswrapper[5107]: I0220 00:31:20.034474 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.264917 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266407 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="manage-dockerfile"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266431 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="manage-dockerfile"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266487 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="docker-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266497 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="docker-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266512 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="git-clone"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266523 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="git-clone"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.266676 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c4f7853-ef9b-4245-97f4-32c388ab8e98" containerName="docker-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.276612 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.278552 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\""
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.279468 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\""
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.279472 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.280810 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-56v9d\""
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.281220 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\""
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.285476 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379155 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrsn\" (UniqueName: \"kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379207 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379226 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379303 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379327 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379351 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379379 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379416 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379466 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379487 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Feb 20 00:31:36
crc kubenswrapper[5107]: I0220 00:31:36.379506 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379643 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.379751 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484351 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484399 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484448 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484471 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484507 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484554 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484590 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484613 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484655 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484685 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484730 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrsn\" 
(UniqueName: \"kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.484790 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.485024 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.485209 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.485321 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: 
I0220 00:31:36.486201 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.486949 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.487570 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.488162 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.488554 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.489296 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.490301 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.494620 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.496480 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.500208 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.509386 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrsn\" (UniqueName: \"kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn\") pod \"service-telemetry-framework-index-1-build\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.621033 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:31:36 crc kubenswrapper[5107]: I0220 00:31:36.827716 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 20 00:31:37 crc kubenswrapper[5107]: I0220 00:31:37.178932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerStarted","Data":"48a01ea5425776c7139ad5a14a0c397b7073a56380be98f4d7315a2ad8a42030"} Feb 20 00:31:38 crc kubenswrapper[5107]: I0220 00:31:38.190470 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerStarted","Data":"a18b20b0a5d0f08ef3a3fc70e1815dabdf191bf8d622dbdf7af9208c42451688"} Feb 20 00:31:39 crc kubenswrapper[5107]: I0220 00:31:39.200021 5107 generic.go:358] "Generic (PLEG): container finished" 
podID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerID="a18b20b0a5d0f08ef3a3fc70e1815dabdf191bf8d622dbdf7af9208c42451688" exitCode=0 Feb 20 00:31:39 crc kubenswrapper[5107]: I0220 00:31:39.200107 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerDied","Data":"a18b20b0a5d0f08ef3a3fc70e1815dabdf191bf8d622dbdf7af9208c42451688"} Feb 20 00:31:40 crc kubenswrapper[5107]: I0220 00:31:40.208104 5107 generic.go:358] "Generic (PLEG): container finished" podID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerID="ea617884e534cc684794871fdb58a84140114c208f051edb23b7ca8327a96948" exitCode=0 Feb 20 00:31:40 crc kubenswrapper[5107]: I0220 00:31:40.208187 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerDied","Data":"ea617884e534cc684794871fdb58a84140114c208f051edb23b7ca8327a96948"} Feb 20 00:31:40 crc kubenswrapper[5107]: I0220 00:31:40.266936 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_0698ee7d-f69c-4d05-8889-2fe7cd10c9ef/manage-dockerfile/0.log" Feb 20 00:31:41 crc kubenswrapper[5107]: I0220 00:31:41.223448 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerStarted","Data":"14859528abfe7b3175a0daee27dd8952ea81fde7d61f86ffec62d5c52dc3726b"} Feb 20 00:31:41 crc kubenswrapper[5107]: I0220 00:31:41.267964 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.267934301 podStartE2EDuration="5.267934301s" podCreationTimestamp="2026-02-20 00:31:36 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:31:41.260329127 +0000 UTC m=+1387.628986703" watchObservedRunningTime="2026-02-20 00:31:41.267934301 +0000 UTC m=+1387.636591877" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.130897 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525792-bw2g5"] Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.139135 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.143836 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-bw2g5"] Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.145751 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.145881 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.145999 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.226925 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46gk\" (UniqueName: \"kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk\") pod \"auto-csr-approver-29525792-bw2g5\" (UID: \"cd49c06d-625b-4221-a044-7ffb4bb94e6c\") " pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.327667 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h46gk\" (UniqueName: 
\"kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk\") pod \"auto-csr-approver-29525792-bw2g5\" (UID: \"cd49c06d-625b-4221-a044-7ffb4bb94e6c\") " pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.347594 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46gk\" (UniqueName: \"kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk\") pod \"auto-csr-approver-29525792-bw2g5\" (UID: \"cd49c06d-625b-4221-a044-7ffb4bb94e6c\") " pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.472169 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:00 crc kubenswrapper[5107]: W0220 00:32:00.696748 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd49c06d_625b_4221_a044_7ffb4bb94e6c.slice/crio-5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45 WatchSource:0}: Error finding container 5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45: Status 404 returned error can't find the container with id 5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45 Feb 20 00:32:00 crc kubenswrapper[5107]: I0220 00:32:00.707487 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-bw2g5"] Feb 20 00:32:01 crc kubenswrapper[5107]: I0220 00:32:01.383516 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" event={"ID":"cd49c06d-625b-4221-a044-7ffb4bb94e6c","Type":"ContainerStarted","Data":"5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45"} Feb 20 00:32:02 crc kubenswrapper[5107]: I0220 00:32:02.394268 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29525792-bw2g5" event={"ID":"cd49c06d-625b-4221-a044-7ffb4bb94e6c","Type":"ContainerStarted","Data":"b66309aef4fa4bfc55b301e0d9ba510252b0a5386c3917d41bf5036502c35f10"} Feb 20 00:32:02 crc kubenswrapper[5107]: I0220 00:32:02.413678 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" podStartSLOduration=1.232160617 podStartE2EDuration="2.413643919s" podCreationTimestamp="2026-02-20 00:32:00 +0000 UTC" firstStartedPulling="2026-02-20 00:32:00.699357617 +0000 UTC m=+1407.068015183" lastFinishedPulling="2026-02-20 00:32:01.880840909 +0000 UTC m=+1408.249498485" observedRunningTime="2026-02-20 00:32:02.408208777 +0000 UTC m=+1408.776866363" watchObservedRunningTime="2026-02-20 00:32:02.413643919 +0000 UTC m=+1408.782301485" Feb 20 00:32:03 crc kubenswrapper[5107]: I0220 00:32:03.404508 5107 generic.go:358] "Generic (PLEG): container finished" podID="cd49c06d-625b-4221-a044-7ffb4bb94e6c" containerID="b66309aef4fa4bfc55b301e0d9ba510252b0a5386c3917d41bf5036502c35f10" exitCode=0 Feb 20 00:32:03 crc kubenswrapper[5107]: I0220 00:32:03.404580 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" event={"ID":"cd49c06d-625b-4221-a044-7ffb4bb94e6c","Type":"ContainerDied","Data":"b66309aef4fa4bfc55b301e0d9ba510252b0a5386c3917d41bf5036502c35f10"} Feb 20 00:32:04 crc kubenswrapper[5107]: I0220 00:32:04.732869 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:04 crc kubenswrapper[5107]: I0220 00:32:04.784199 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h46gk\" (UniqueName: \"kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk\") pod \"cd49c06d-625b-4221-a044-7ffb4bb94e6c\" (UID: \"cd49c06d-625b-4221-a044-7ffb4bb94e6c\") " Feb 20 00:32:04 crc kubenswrapper[5107]: I0220 00:32:04.791020 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk" (OuterVolumeSpecName: "kube-api-access-h46gk") pod "cd49c06d-625b-4221-a044-7ffb4bb94e6c" (UID: "cd49c06d-625b-4221-a044-7ffb4bb94e6c"). InnerVolumeSpecName "kube-api-access-h46gk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:04 crc kubenswrapper[5107]: I0220 00:32:04.885666 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h46gk\" (UniqueName: \"kubernetes.io/projected/cd49c06d-625b-4221-a044-7ffb4bb94e6c-kube-api-access-h46gk\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:05 crc kubenswrapper[5107]: I0220 00:32:05.422745 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" event={"ID":"cd49c06d-625b-4221-a044-7ffb4bb94e6c","Type":"ContainerDied","Data":"5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45"} Feb 20 00:32:05 crc kubenswrapper[5107]: I0220 00:32:05.423473 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5815b72fb2c056b2aa5effe38e8d70a7173e7692634675b4566ce2e59690dc45" Feb 20 00:32:05 crc kubenswrapper[5107]: I0220 00:32:05.423078 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525792-bw2g5" Feb 20 00:32:05 crc kubenswrapper[5107]: I0220 00:32:05.503482 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-5fsgl"] Feb 20 00:32:05 crc kubenswrapper[5107]: I0220 00:32:05.512936 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525786-5fsgl"] Feb 20 00:32:06 crc kubenswrapper[5107]: I0220 00:32:06.499709 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1275ee6b-e46c-4c79-8c4f-e326dbab2b6b" path="/var/lib/kubelet/pods/1275ee6b-e46c-4c79-8c4f-e326dbab2b6b/volumes" Feb 20 00:32:19 crc kubenswrapper[5107]: I0220 00:32:19.546920 5107 generic.go:358] "Generic (PLEG): container finished" podID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerID="14859528abfe7b3175a0daee27dd8952ea81fde7d61f86ffec62d5c52dc3726b" exitCode=0 Feb 20 00:32:19 crc kubenswrapper[5107]: I0220 00:32:19.547084 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerDied","Data":"14859528abfe7b3175a0daee27dd8952ea81fde7d61f86ffec62d5c52dc3726b"} Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.866030 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949508 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949562 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949584 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949605 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949642 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949681 5107 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949723 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjrsn\" (UniqueName: \"kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949744 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949809 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949827 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949885 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949910 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.949947 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir\") pod \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\" (UID: \"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef\") " Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.950269 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.951142 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.951799 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.952325 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.953461 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.953469 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.954724 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.957081 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push" (OuterVolumeSpecName: "builder-dockercfg-56v9d-push") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "builder-dockercfg-56v9d-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.957631 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull" (OuterVolumeSpecName: "builder-dockercfg-56v9d-pull") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "builder-dockercfg-56v9d-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.958955 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 20 00:32:20 crc kubenswrapper[5107]: I0220 00:32:20.959410 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn" (OuterVolumeSpecName: "kube-api-access-gjrsn") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "kube-api-access-gjrsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051457 5107 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051500 5107 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051528 5107 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051544 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-push\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-push\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051559 5107 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051575 5107 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjrsn\" (UniqueName: \"kubernetes.io/projected/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-kube-api-access-gjrsn\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051589 5107 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-56v9d-pull\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-builder-dockercfg-56v9d-pull\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051603 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051619 5107 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051635 5107 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.051651 5107 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.192066 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: 
"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.255742 5107 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.569893 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"0698ee7d-f69c-4d05-8889-2fe7cd10c9ef","Type":"ContainerDied","Data":"48a01ea5425776c7139ad5a14a0c397b7073a56380be98f4d7315a2ad8a42030"} Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.570520 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a01ea5425776c7139ad5a14a0c397b7073a56380be98f4d7315a2ad8a42030" Feb 20 00:32:21 crc kubenswrapper[5107]: I0220 00:32:21.570709 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 20 00:32:22 crc kubenswrapper[5107]: I0220 00:32:22.173005 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" (UID: "0698ee7d-f69c-4d05-8889-2fe7cd10c9ef"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:22 crc kubenswrapper[5107]: I0220 00:32:22.269979 5107 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0698ee7d-f69c-4d05-8889-2fe7cd10c9ef-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.625104 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626625 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="docker-build" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626651 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="docker-build" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626669 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="manage-dockerfile" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626680 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="manage-dockerfile" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626720 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="git-clone" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626732 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="git-clone" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626776 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cd49c06d-625b-4221-a044-7ffb4bb94e6c" containerName="oc" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626787 5107 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cd49c06d-625b-4221-a044-7ffb4bb94e6c" containerName="oc" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626958 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="0698ee7d-f69c-4d05-8889-2fe7cd10c9ef" containerName="docker-build" Feb 20 00:32:24 crc kubenswrapper[5107]: I0220 00:32:24.626981 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="cd49c06d-625b-4221-a044-7ffb4bb94e6c" containerName="oc" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.175657 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.175831 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.177862 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-vvz7d\"" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.226516 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb\") pod \"infrawatch-operators-xvrkk\" (UID: \"f371dfba-5e05-49f3-9fa4-3524cea79b31\") " pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.327881 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb\") pod \"infrawatch-operators-xvrkk\" (UID: \"f371dfba-5e05-49f3-9fa4-3524cea79b31\") " pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.347854 5107 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb\") pod \"infrawatch-operators-xvrkk\" (UID: \"f371dfba-5e05-49f3-9fa4-3524cea79b31\") " pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.496095 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:25 crc kubenswrapper[5107]: I0220 00:32:25.770727 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:26 crc kubenswrapper[5107]: I0220 00:32:26.627242 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xvrkk" event={"ID":"f371dfba-5e05-49f3-9fa4-3524cea79b31","Type":"ContainerStarted","Data":"8a59342e67d200dcd92bdd792b23d2681d652a0f94e19b4914ef75fdec4d3aa3"} Feb 20 00:32:30 crc kubenswrapper[5107]: I0220 00:32:30.004442 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:30 crc kubenswrapper[5107]: I0220 00:32:30.804132 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-rpcm9"] Feb 20 00:32:30 crc kubenswrapper[5107]: I0220 00:32:30.825969 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rpcm9"] Feb 20 00:32:30 crc kubenswrapper[5107]: I0220 00:32:30.826075 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:30 crc kubenswrapper[5107]: I0220 00:32:30.928863 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtp4p\" (UniqueName: \"kubernetes.io/projected/41920df7-b08a-4f39-a767-a5853d7b2f31-kube-api-access-vtp4p\") pod \"infrawatch-operators-rpcm9\" (UID: \"41920df7-b08a-4f39-a767-a5853d7b2f31\") " pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:31 crc kubenswrapper[5107]: I0220 00:32:31.029958 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vtp4p\" (UniqueName: \"kubernetes.io/projected/41920df7-b08a-4f39-a767-a5853d7b2f31-kube-api-access-vtp4p\") pod \"infrawatch-operators-rpcm9\" (UID: \"41920df7-b08a-4f39-a767-a5853d7b2f31\") " pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:31 crc kubenswrapper[5107]: I0220 00:32:31.055400 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtp4p\" (UniqueName: \"kubernetes.io/projected/41920df7-b08a-4f39-a767-a5853d7b2f31-kube-api-access-vtp4p\") pod \"infrawatch-operators-rpcm9\" (UID: \"41920df7-b08a-4f39-a767-a5853d7b2f31\") " pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:31 crc kubenswrapper[5107]: I0220 00:32:31.141573 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:35 crc kubenswrapper[5107]: I0220 00:32:35.742781 5107 scope.go:117] "RemoveContainer" containerID="1fd70078f084d376ffcce1ca90d8a0e161a3a01cf7d0f62347b4fc1be9ab24b8" Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.251038 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rpcm9"] Feb 20 00:32:37 crc kubenswrapper[5107]: W0220 00:32:37.275298 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41920df7_b08a_4f39_a767_a5853d7b2f31.slice/crio-34bd42e6aaf578468f4f3bad91f7a7b230066cd6a5e0eb26361e4d3785960916 WatchSource:0}: Error finding container 34bd42e6aaf578468f4f3bad91f7a7b230066cd6a5e0eb26361e4d3785960916: Status 404 returned error can't find the container with id 34bd42e6aaf578468f4f3bad91f7a7b230066cd6a5e0eb26361e4d3785960916 Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.727879 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rpcm9" event={"ID":"41920df7-b08a-4f39-a767-a5853d7b2f31","Type":"ContainerStarted","Data":"c4cc77426c2ea3d43db91dfa70f757084117b02046159bb4b6da09fbdaab9107"} Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.728466 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rpcm9" event={"ID":"41920df7-b08a-4f39-a767-a5853d7b2f31","Type":"ContainerStarted","Data":"34bd42e6aaf578468f4f3bad91f7a7b230066cd6a5e0eb26361e4d3785960916"} Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.730860 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xvrkk" event={"ID":"f371dfba-5e05-49f3-9fa4-3524cea79b31","Type":"ContainerStarted","Data":"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc"} Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.730973 5107 
kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-xvrkk" podUID="f371dfba-5e05-49f3-9fa4-3524cea79b31" containerName="registry-server" containerID="cri-o://6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc" gracePeriod=2 Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.749900 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-rpcm9" podStartSLOduration=7.63365616 podStartE2EDuration="7.749874428s" podCreationTimestamp="2026-02-20 00:32:30 +0000 UTC" firstStartedPulling="2026-02-20 00:32:37.277246857 +0000 UTC m=+1443.645904433" lastFinishedPulling="2026-02-20 00:32:37.393465125 +0000 UTC m=+1443.762122701" observedRunningTime="2026-02-20 00:32:37.748300144 +0000 UTC m=+1444.116957720" watchObservedRunningTime="2026-02-20 00:32:37.749874428 +0000 UTC m=+1444.118532034" Feb 20 00:32:37 crc kubenswrapper[5107]: I0220 00:32:37.776128 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-xvrkk" podStartSLOduration=2.397820457 podStartE2EDuration="13.776103186s" podCreationTimestamp="2026-02-20 00:32:24 +0000 UTC" firstStartedPulling="2026-02-20 00:32:25.780594339 +0000 UTC m=+1432.149251905" lastFinishedPulling="2026-02-20 00:32:37.158877028 +0000 UTC m=+1443.527534634" observedRunningTime="2026-02-20 00:32:37.769105079 +0000 UTC m=+1444.137762695" watchObservedRunningTime="2026-02-20 00:32:37.776103186 +0000 UTC m=+1444.144760782" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.183818 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.235368 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb\") pod \"f371dfba-5e05-49f3-9fa4-3524cea79b31\" (UID: \"f371dfba-5e05-49f3-9fa4-3524cea79b31\") " Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.242361 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb" (OuterVolumeSpecName: "kube-api-access-rqtrb") pod "f371dfba-5e05-49f3-9fa4-3524cea79b31" (UID: "f371dfba-5e05-49f3-9fa4-3524cea79b31"). InnerVolumeSpecName "kube-api-access-rqtrb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.337809 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqtrb\" (UniqueName: \"kubernetes.io/projected/f371dfba-5e05-49f3-9fa4-3524cea79b31-kube-api-access-rqtrb\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.741503 5107 generic.go:358] "Generic (PLEG): container finished" podID="f371dfba-5e05-49f3-9fa4-3524cea79b31" containerID="6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc" exitCode=0 Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.741630 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xvrkk" event={"ID":"f371dfba-5e05-49f3-9fa4-3524cea79b31","Type":"ContainerDied","Data":"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc"} Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.741697 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-xvrkk" 
event={"ID":"f371dfba-5e05-49f3-9fa4-3524cea79b31","Type":"ContainerDied","Data":"8a59342e67d200dcd92bdd792b23d2681d652a0f94e19b4914ef75fdec4d3aa3"} Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.741730 5107 scope.go:117] "RemoveContainer" containerID="6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.741648 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-xvrkk" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.769727 5107 scope.go:117] "RemoveContainer" containerID="6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc" Feb 20 00:32:38 crc kubenswrapper[5107]: E0220 00:32:38.771066 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc\": container with ID starting with 6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc not found: ID does not exist" containerID="6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.771107 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc"} err="failed to get container status \"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc\": rpc error: code = NotFound desc = could not find container \"6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc\": container with ID starting with 6e70cf8658a3e5814d4e350ff5bc86df64652f52158d217b4cab830275dcc5cc not found: ID does not exist" Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.773169 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:38 crc kubenswrapper[5107]: I0220 00:32:38.783049 5107 
kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-xvrkk"] Feb 20 00:32:40 crc kubenswrapper[5107]: I0220 00:32:40.494914 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f371dfba-5e05-49f3-9fa4-3524cea79b31" path="/var/lib/kubelet/pods/f371dfba-5e05-49f3-9fa4-3524cea79b31/volumes" Feb 20 00:32:41 crc kubenswrapper[5107]: I0220 00:32:41.142788 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:41 crc kubenswrapper[5107]: I0220 00:32:41.143493 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:41 crc kubenswrapper[5107]: I0220 00:32:41.187462 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:42 crc kubenswrapper[5107]: I0220 00:32:42.841533 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-rpcm9" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.882128 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj"] Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.884475 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f371dfba-5e05-49f3-9fa4-3524cea79b31" containerName="registry-server" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.884515 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f371dfba-5e05-49f3-9fa4-3524cea79b31" containerName="registry-server" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.884742 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f371dfba-5e05-49f3-9fa4-3524cea79b31" containerName="registry-server" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.891827 5107 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.906202 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj"] Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.970911 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb45\" (UniqueName: \"kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.971096 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:46 crc kubenswrapper[5107]: I0220 00:32:46.971320 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.073532 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhb45\" (UniqueName: 
\"kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.073703 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.073783 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.074652 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.074773 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " 
pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.107215 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhb45\" (UniqueName: \"kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.215179 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.511182 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj"] Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.662863 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb"] Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.671407 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.683841 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb"] Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.785801 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.785846 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8wf\" (UniqueName: \"kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.786512 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.848951 5107 generic.go:358] "Generic (PLEG): container finished" podID="f0388007-94ea-4ad7-b67f-b00bd73d9520" 
containerID="876ca9067af19e4fa84b2e629e1d065220cf38d9ffea6f9a0ea5197e27ec68e7" exitCode=0 Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.849054 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" event={"ID":"f0388007-94ea-4ad7-b67f-b00bd73d9520","Type":"ContainerDied","Data":"876ca9067af19e4fa84b2e629e1d065220cf38d9ffea6f9a0ea5197e27ec68e7"} Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.849103 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" event={"ID":"f0388007-94ea-4ad7-b67f-b00bd73d9520","Type":"ContainerStarted","Data":"7ff1212279749b7fecafa6597bbb02f423df5747c7e92ff61f53d4cced72b975"} Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.887708 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.887766 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.887791 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8wf\" (UniqueName: \"kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf\") pod 
\"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.888523 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.888866 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:47 crc kubenswrapper[5107]: I0220 00:32:47.908018 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8wf\" (UniqueName: \"kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.002368 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.392583 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb"] Feb 20 00:32:48 crc kubenswrapper[5107]: W0220 00:32:48.423656 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a2ad6f_4207_484e_b1a8_6b1c947e7872.slice/crio-a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597 WatchSource:0}: Error finding container a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597: Status 404 returned error can't find the container with id a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597 Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.862625 5107 generic.go:358] "Generic (PLEG): container finished" podID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerID="aa7a40a23807c29f55ce5e2cf79018f6df125e2351419dd1362dbebd894f38b6" exitCode=0 Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.862778 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" event={"ID":"24a2ad6f-4207-484e-b1a8-6b1c947e7872","Type":"ContainerDied","Data":"aa7a40a23807c29f55ce5e2cf79018f6df125e2351419dd1362dbebd894f38b6"} Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.863247 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" event={"ID":"24a2ad6f-4207-484e-b1a8-6b1c947e7872","Type":"ContainerStarted","Data":"a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597"} Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.867339 5107 generic.go:358] "Generic (PLEG): container finished" podID="f0388007-94ea-4ad7-b67f-b00bd73d9520" 
containerID="42fd3ace39570d33561dcf09dffd513538db7fac5460df82110f8fff67908c87" exitCode=0 Feb 20 00:32:48 crc kubenswrapper[5107]: I0220 00:32:48.867419 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" event={"ID":"f0388007-94ea-4ad7-b67f-b00bd73d9520","Type":"ContainerDied","Data":"42fd3ace39570d33561dcf09dffd513538db7fac5460df82110f8fff67908c87"} Feb 20 00:32:49 crc kubenswrapper[5107]: I0220 00:32:49.878221 5107 generic.go:358] "Generic (PLEG): container finished" podID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerID="e268c7fe46cad43afddb36a467975e7bd5e44bf802aa3617bc1dc6f486a7438f" exitCode=0 Feb 20 00:32:49 crc kubenswrapper[5107]: I0220 00:32:49.878466 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" event={"ID":"24a2ad6f-4207-484e-b1a8-6b1c947e7872","Type":"ContainerDied","Data":"e268c7fe46cad43afddb36a467975e7bd5e44bf802aa3617bc1dc6f486a7438f"} Feb 20 00:32:49 crc kubenswrapper[5107]: I0220 00:32:49.882483 5107 generic.go:358] "Generic (PLEG): container finished" podID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerID="13a301a37da47be61fd4cd707c279d22ea8287f09588a69f7123f48dd2f7be46" exitCode=0 Feb 20 00:32:49 crc kubenswrapper[5107]: I0220 00:32:49.882623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" event={"ID":"f0388007-94ea-4ad7-b67f-b00bd73d9520","Type":"ContainerDied","Data":"13a301a37da47be61fd4cd707c279d22ea8287f09588a69f7123f48dd2f7be46"} Feb 20 00:32:50 crc kubenswrapper[5107]: I0220 00:32:50.896545 5107 generic.go:358] "Generic (PLEG): container finished" podID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerID="09b368e58d27e75b2246ac8c08812cdeb92d893fe9691ce468f47c9cdbb4cc1e" exitCode=0 Feb 20 00:32:50 crc kubenswrapper[5107]: I0220 00:32:50.896608 5107 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" event={"ID":"24a2ad6f-4207-484e-b1a8-6b1c947e7872","Type":"ContainerDied","Data":"09b368e58d27e75b2246ac8c08812cdeb92d893fe9691ce468f47c9cdbb4cc1e"} Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.262645 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.359109 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle\") pod \"f0388007-94ea-4ad7-b67f-b00bd73d9520\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.359265 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util\") pod \"f0388007-94ea-4ad7-b67f-b00bd73d9520\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.359305 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhb45\" (UniqueName: \"kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45\") pod \"f0388007-94ea-4ad7-b67f-b00bd73d9520\" (UID: \"f0388007-94ea-4ad7-b67f-b00bd73d9520\") " Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.360123 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle" (OuterVolumeSpecName: "bundle") pod "f0388007-94ea-4ad7-b67f-b00bd73d9520" (UID: "f0388007-94ea-4ad7-b67f-b00bd73d9520"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.368596 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45" (OuterVolumeSpecName: "kube-api-access-hhb45") pod "f0388007-94ea-4ad7-b67f-b00bd73d9520" (UID: "f0388007-94ea-4ad7-b67f-b00bd73d9520"). InnerVolumeSpecName "kube-api-access-hhb45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.377829 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util" (OuterVolumeSpecName: "util") pod "f0388007-94ea-4ad7-b67f-b00bd73d9520" (UID: "f0388007-94ea-4ad7-b67f-b00bd73d9520"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.460661 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.460714 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0388007-94ea-4ad7-b67f-b00bd73d9520-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.460731 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhb45\" (UniqueName: \"kubernetes.io/projected/f0388007-94ea-4ad7-b67f-b00bd73d9520-kube-api-access-hhb45\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.910589 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" 
event={"ID":"f0388007-94ea-4ad7-b67f-b00bd73d9520","Type":"ContainerDied","Data":"7ff1212279749b7fecafa6597bbb02f423df5747c7e92ff61f53d4cced72b975"} Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.910654 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff1212279749b7fecafa6597bbb02f423df5747c7e92ff61f53d4cced72b975" Feb 20 00:32:51 crc kubenswrapper[5107]: I0220 00:32:51.910703 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09qccjj" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.237379 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.385695 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle\") pod \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.385813 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8wf\" (UniqueName: \"kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf\") pod \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.385977 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util\") pod \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\" (UID: \"24a2ad6f-4207-484e-b1a8-6b1c947e7872\") " Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.386680 5107 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle" (OuterVolumeSpecName: "bundle") pod "24a2ad6f-4207-484e-b1a8-6b1c947e7872" (UID: "24a2ad6f-4207-484e-b1a8-6b1c947e7872"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.391615 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf" (OuterVolumeSpecName: "kube-api-access-8n8wf") pod "24a2ad6f-4207-484e-b1a8-6b1c947e7872" (UID: "24a2ad6f-4207-484e-b1a8-6b1c947e7872"). InnerVolumeSpecName "kube-api-access-8n8wf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.398545 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util" (OuterVolumeSpecName: "util") pod "24a2ad6f-4207-484e-b1a8-6b1c947e7872" (UID: "24a2ad6f-4207-484e-b1a8-6b1c947e7872"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.487306 5107 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-util\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.487366 5107 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/24a2ad6f-4207-484e-b1a8-6b1c947e7872-bundle\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.487394 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8n8wf\" (UniqueName: \"kubernetes.io/projected/24a2ad6f-4207-484e-b1a8-6b1c947e7872-kube-api-access-8n8wf\") on node \"crc\" DevicePath \"\"" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.921351 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.921395 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65asncmb" event={"ID":"24a2ad6f-4207-484e-b1a8-6b1c947e7872","Type":"ContainerDied","Data":"a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597"} Feb 20 00:32:52 crc kubenswrapper[5107]: I0220 00:32:52.922207 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a5d7c4892090bc869fb814e3f6d4707728537174b506ca95f8fd1abd4e5597" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.112545 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5cd758d596-cmlk9"] Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113430 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="pull" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113444 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="pull" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113461 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="util" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113466 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="util" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113483 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113490 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113497 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="util" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113502 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="util" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113510 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113515 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113521 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="pull" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113526 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="pull" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113618 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="f0388007-94ea-4ad7-b67f-b00bd73d9520" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.113628 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="24a2ad6f-4207-484e-b1a8-6b1c947e7872" containerName="extract" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.120542 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.125846 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-dcgjd\"" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.133595 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5cd758d596-cmlk9"] Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.168977 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-runner\") pod \"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.169036 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7lg\" (UniqueName: \"kubernetes.io/projected/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-kube-api-access-sb7lg\") pod 
\"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.269580 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-runner\") pod \"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.269634 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7lg\" (UniqueName: \"kubernetes.io/projected/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-kube-api-access-sb7lg\") pod \"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.270009 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-runner\") pod \"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.286786 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7lg\" (UniqueName: \"kubernetes.io/projected/c6fa5fd7-c794-4025-95a1-b237f8ee60c1-kube-api-access-sb7lg\") pod \"service-telemetry-operator-5cd758d596-cmlk9\" (UID: \"c6fa5fd7-c794-4025-95a1-b237f8ee60c1\") " pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.439339 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.952491 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5cd758d596-cmlk9"] Feb 20 00:32:58 crc kubenswrapper[5107]: W0220 00:32:58.960479 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6fa5fd7_c794_4025_95a1_b237f8ee60c1.slice/crio-d4a5e414f1823574ecc43d0f806f67a8352ecc4350b345530e4c56f905d730d2 WatchSource:0}: Error finding container d4a5e414f1823574ecc43d0f806f67a8352ecc4350b345530e4c56f905d730d2: Status 404 returned error can't find the container with id d4a5e414f1823574ecc43d0f806f67a8352ecc4350b345530e4c56f905d730d2 Feb 20 00:32:58 crc kubenswrapper[5107]: I0220 00:32:58.990732 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" event={"ID":"c6fa5fd7-c794-4025-95a1-b237f8ee60c1","Type":"ContainerStarted","Data":"d4a5e414f1823574ecc43d0f806f67a8352ecc4350b345530e4c56f905d730d2"} Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.278495 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc"] Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.284353 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.286828 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-5lbjc\"" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.289490 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc"] Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.382996 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rx4c\" (UniqueName: \"kubernetes.io/projected/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-kube-api-access-2rx4c\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.383240 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-runner\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.484952 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-runner\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.485077 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2rx4c\" (UniqueName: 
\"kubernetes.io/projected/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-kube-api-access-2rx4c\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.485938 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-runner\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.515210 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rx4c\" (UniqueName: \"kubernetes.io/projected/bc8c9fb1-58d6-422d-ae81-e2a9cf72c811-kube-api-access-2rx4c\") pod \"smart-gateway-operator-6fcbfcd9fd-fhjxc\" (UID: \"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811\") " pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:32:59 crc kubenswrapper[5107]: I0220 00:32:59.616703 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" Feb 20 00:33:00 crc kubenswrapper[5107]: I0220 00:33:00.050325 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc"] Feb 20 00:33:00 crc kubenswrapper[5107]: W0220 00:33:00.086319 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc8c9fb1_58d6_422d_ae81_e2a9cf72c811.slice/crio-8dfeca243bd3f2f56063d8c96cd5999533569441e0f6ff2019debec9fbcfd396 WatchSource:0}: Error finding container 8dfeca243bd3f2f56063d8c96cd5999533569441e0f6ff2019debec9fbcfd396: Status 404 returned error can't find the container with id 8dfeca243bd3f2f56063d8c96cd5999533569441e0f6ff2019debec9fbcfd396 Feb 20 00:33:01 crc kubenswrapper[5107]: I0220 00:33:01.022180 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" event={"ID":"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811","Type":"ContainerStarted","Data":"8dfeca243bd3f2f56063d8c96cd5999533569441e0f6ff2019debec9fbcfd396"} Feb 20 00:33:21 crc kubenswrapper[5107]: I0220 00:33:21.231800 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" event={"ID":"bc8c9fb1-58d6-422d-ae81-e2a9cf72c811","Type":"ContainerStarted","Data":"59ac49f2f63a771d7362b53b45b9254c875c748951ad39ba2ee471a95b6f8ef4"} Feb 20 00:33:21 crc kubenswrapper[5107]: I0220 00:33:21.234067 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" event={"ID":"c6fa5fd7-c794-4025-95a1-b237f8ee60c1","Type":"ContainerStarted","Data":"2c40b8f9a300f58f734185fbf77446d05e92b036edbb4be4bb069764b22d37d2"} Feb 20 00:33:21 crc kubenswrapper[5107]: I0220 00:33:21.260409 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/smart-gateway-operator-6fcbfcd9fd-fhjxc" podStartSLOduration=1.8395307 podStartE2EDuration="22.260391996s" podCreationTimestamp="2026-02-20 00:32:59 +0000 UTC" firstStartedPulling="2026-02-20 00:33:00.089177291 +0000 UTC m=+1466.457834857" lastFinishedPulling="2026-02-20 00:33:20.510038547 +0000 UTC m=+1486.878696153" observedRunningTime="2026-02-20 00:33:21.256328902 +0000 UTC m=+1487.624986478" watchObservedRunningTime="2026-02-20 00:33:21.260391996 +0000 UTC m=+1487.629049562" Feb 20 00:33:21 crc kubenswrapper[5107]: I0220 00:33:21.283380 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5cd758d596-cmlk9" podStartSLOduration=1.7328142789999998 podStartE2EDuration="23.283361602s" podCreationTimestamp="2026-02-20 00:32:58 +0000 UTC" firstStartedPulling="2026-02-20 00:32:58.961704976 +0000 UTC m=+1465.330362542" lastFinishedPulling="2026-02-20 00:33:20.512252259 +0000 UTC m=+1486.880909865" observedRunningTime="2026-02-20 00:33:21.278868766 +0000 UTC m=+1487.647526362" watchObservedRunningTime="2026-02-20 00:33:21.283361602 +0000 UTC m=+1487.652019168" Feb 20 00:33:32 crc kubenswrapper[5107]: I0220 00:33:32.825195 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:33:32 crc kubenswrapper[5107]: I0220 00:33:32.825805 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:33:35 crc kubenswrapper[5107]: I0220 00:33:35.219470 5107 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:33:35 crc kubenswrapper[5107]: I0220 00:33:35.221541 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:33:35 crc kubenswrapper[5107]: I0220 00:33:35.229500 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:33:35 crc kubenswrapper[5107]: I0220 00:33:35.231223 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.109307 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"] Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.225277 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"] Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.225423 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.228449 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.228569 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-t8mm9\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.228590 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.228604 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.228906 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.233333 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.233455 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.336299 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 
00:33:44.336405 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.336600 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.336691 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.336771 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lnm\" (UniqueName: \"kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.336832 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.337045 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438287 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438377 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438415 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: 
\"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438443 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438462 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438482 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lnm\" (UniqueName: \"kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.438502 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.440631 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"sasl-config\" (UniqueName: \"kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.446730 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.449540 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.457371 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.458207 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.461486 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.462986 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lnm\" (UniqueName: \"kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm\") pod \"default-interconnect-55bf8d5cb-89jwh\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") " pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.540115 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:33:44 crc kubenswrapper[5107]: I0220 00:33:44.744375 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"] Feb 20 00:33:45 crc kubenswrapper[5107]: I0220 00:33:45.460799 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" event={"ID":"548c0b78-a052-4a56-84de-08b3d97c2522","Type":"ContainerStarted","Data":"6fe6fd7d945670d4e58eb605633469ee55162eea1190424b0dc1733b50a6e4d0"} Feb 20 00:33:50 crc kubenswrapper[5107]: I0220 00:33:50.537412 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" event={"ID":"548c0b78-a052-4a56-84de-08b3d97c2522","Type":"ContainerStarted","Data":"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c"} Feb 20 00:33:50 crc kubenswrapper[5107]: I0220 00:33:50.566032 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" podStartSLOduration=1.531101794 podStartE2EDuration="6.56600783s" podCreationTimestamp="2026-02-20 00:33:44 +0000 UTC" firstStartedPulling="2026-02-20 00:33:44.754906486 +0000 UTC m=+1511.123564042" lastFinishedPulling="2026-02-20 00:33:49.789812472 +0000 UTC m=+1516.158470078" observedRunningTime="2026-02-20 00:33:50.563747906 +0000 UTC m=+1516.932405582" watchObservedRunningTime="2026-02-20 00:33:50.56600783 +0000 UTC m=+1516.934665426" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.678557 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.687132 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.690926 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.691227 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.691550 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.691710 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-kmbxr\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.691883 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.692122 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.692742 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.692945 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.696378 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.696392 5107 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.697274 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804743 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804817 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-config\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804858 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-tls-assets\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804899 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804956 5107 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.804981 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/df14b034-8115-4790-870b-81499599ef18-config-out\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805005 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-web-config\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805026 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzz75\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-kube-api-access-gzz75\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805054 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e65351b-2f39-4244-be87-6dcced19c453\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e65351b-2f39-4244-be87-6dcced19c453\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " 
pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805094 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805132 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.805181 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906711 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906798 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-config\") pod 
\"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906849 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-tls-assets\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906902 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906953 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.906983 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/df14b034-8115-4790-870b-81499599ef18-config-out\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907014 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-web-config\") pod \"prometheus-default-0\" (UID: 
\"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907041 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gzz75\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-kube-api-access-gzz75\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-6e65351b-2f39-4244-be87-6dcced19c453\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e65351b-2f39-4244-be87-6dcced19c453\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907162 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907230 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.907282 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.908228 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.908245 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: E0220 00:33:54.908407 5107 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 20 00:33:54 crc kubenswrapper[5107]: E0220 00:33:54.908508 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls podName:df14b034-8115-4790-870b-81499599ef18 nodeName:}" failed. No retries permitted until 2026-02-20 00:33:55.408476695 +0000 UTC m=+1521.777134301 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "df14b034-8115-4790-870b-81499599ef18") : secret "default-prometheus-proxy-tls" not found Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.909138 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.911046 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/df14b034-8115-4790-870b-81499599ef18-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.915060 5107 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.915120 5107 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-6e65351b-2f39-4244-be87-6dcced19c453\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e65351b-2f39-4244-be87-6dcced19c453\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ea2c716c4ef7929fc8724f0bbdccb5d7d2ed8d8a833eeaf29b2b1d1c4f28f32/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.922685 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/df14b034-8115-4790-870b-81499599ef18-config-out\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.923429 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-config\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.923459 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-tls-assets\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.923479 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: 
\"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.924094 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-web-config\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.939049 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzz75\" (UniqueName: \"kubernetes.io/projected/df14b034-8115-4790-870b-81499599ef18-kube-api-access-gzz75\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:54 crc kubenswrapper[5107]: I0220 00:33:54.956595 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-6e65351b-2f39-4244-be87-6dcced19c453\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e65351b-2f39-4244-be87-6dcced19c453\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:55 crc kubenswrapper[5107]: I0220 00:33:55.414526 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:55 crc kubenswrapper[5107]: E0220 00:33:55.414943 5107 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 20 00:33:55 crc kubenswrapper[5107]: E0220 00:33:55.415169 5107 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls podName:df14b034-8115-4790-870b-81499599ef18 nodeName:}" failed. No retries permitted until 2026-02-20 00:33:56.415121592 +0000 UTC m=+1522.783779168 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "df14b034-8115-4790-870b-81499599ef18") : secret "default-prometheus-proxy-tls" not found Feb 20 00:33:56 crc kubenswrapper[5107]: I0220 00:33:56.435177 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:56 crc kubenswrapper[5107]: I0220 00:33:56.441649 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/df14b034-8115-4790-870b-81499599ef18-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"df14b034-8115-4790-870b-81499599ef18\") " pod="service-telemetry/prometheus-default-0" Feb 20 00:33:56 crc kubenswrapper[5107]: I0220 00:33:56.509688 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 20 00:33:56 crc kubenswrapper[5107]: I0220 00:33:56.968292 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 20 00:33:57 crc kubenswrapper[5107]: I0220 00:33:57.828826 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerStarted","Data":"de49d243adbb89ec984ab99a2b5be123f46773af2bafc442d5afff8b87acd876"} Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.129938 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525794-gf9xm"] Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.136170 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.138753 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.138884 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.139803 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.142207 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-gf9xm"] Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.266754 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljz8c\" (UniqueName: \"kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c\") pod \"auto-csr-approver-29525794-gf9xm\" 
(UID: \"54deb717-c484-4ab7-89df-46458a4846ec\") " pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.367519 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljz8c\" (UniqueName: \"kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c\") pod \"auto-csr-approver-29525794-gf9xm\" (UID: \"54deb717-c484-4ab7-89df-46458a4846ec\") " pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.387068 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljz8c\" (UniqueName: \"kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c\") pod \"auto-csr-approver-29525794-gf9xm\" (UID: \"54deb717-c484-4ab7-89df-46458a4846ec\") " pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:00 crc kubenswrapper[5107]: I0220 00:34:00.578867 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:01 crc kubenswrapper[5107]: W0220 00:34:01.251714 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54deb717_c484_4ab7_89df_46458a4846ec.slice/crio-d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c WatchSource:0}: Error finding container d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c: Status 404 returned error can't find the container with id d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c Feb 20 00:34:01 crc kubenswrapper[5107]: I0220 00:34:01.263482 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-gf9xm"] Feb 20 00:34:01 crc kubenswrapper[5107]: I0220 00:34:01.886974 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" event={"ID":"54deb717-c484-4ab7-89df-46458a4846ec","Type":"ContainerStarted","Data":"d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c"} Feb 20 00:34:02 crc kubenswrapper[5107]: I0220 00:34:02.824717 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:34:02 crc kubenswrapper[5107]: I0220 00:34:02.825120 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:34:02 crc kubenswrapper[5107]: I0220 00:34:02.896329 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29525794-gf9xm" event={"ID":"54deb717-c484-4ab7-89df-46458a4846ec","Type":"ContainerStarted","Data":"6fb328d4f4d1ca4634fd556c1eaefaed1bd8da9521f8a3641fa8eba8fcd7e07b"} Feb 20 00:34:02 crc kubenswrapper[5107]: I0220 00:34:02.898953 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerStarted","Data":"6e52b8d99fdbfa3db418bb3bdefd14fc762d985ec9319d1f129114012136e7c8"} Feb 20 00:34:02 crc kubenswrapper[5107]: I0220 00:34:02.920131 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" podStartSLOduration=1.912551957 podStartE2EDuration="2.9201113s" podCreationTimestamp="2026-02-20 00:34:00 +0000 UTC" firstStartedPulling="2026-02-20 00:34:01.253983657 +0000 UTC m=+1527.622641223" lastFinishedPulling="2026-02-20 00:34:02.26154296 +0000 UTC m=+1528.630200566" observedRunningTime="2026-02-20 00:34:02.912170837 +0000 UTC m=+1529.280828433" watchObservedRunningTime="2026-02-20 00:34:02.9201113 +0000 UTC m=+1529.288768876" Feb 20 00:34:03 crc kubenswrapper[5107]: I0220 00:34:03.912701 5107 generic.go:358] "Generic (PLEG): container finished" podID="54deb717-c484-4ab7-89df-46458a4846ec" containerID="6fb328d4f4d1ca4634fd556c1eaefaed1bd8da9521f8a3641fa8eba8fcd7e07b" exitCode=0 Feb 20 00:34:03 crc kubenswrapper[5107]: I0220 00:34:03.912768 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" event={"ID":"54deb717-c484-4ab7-89df-46458a4846ec","Type":"ContainerDied","Data":"6fb328d4f4d1ca4634fd556c1eaefaed1bd8da9521f8a3641fa8eba8fcd7e07b"} Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.429393 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-ppgbd"] Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.436348 5107 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.441624 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-ppgbd"] Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.530822 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc968\" (UniqueName: \"kubernetes.io/projected/60edd466-2f48-492e-9c1b-a58f4a8882a4-kube-api-access-jc968\") pod \"default-snmp-webhook-694dc457d5-ppgbd\" (UID: \"60edd466-2f48-492e-9c1b-a58f4a8882a4\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.636015 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jc968\" (UniqueName: \"kubernetes.io/projected/60edd466-2f48-492e-9c1b-a58f4a8882a4-kube-api-access-jc968\") pod \"default-snmp-webhook-694dc457d5-ppgbd\" (UID: \"60edd466-2f48-492e-9c1b-a58f4a8882a4\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.666697 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc968\" (UniqueName: \"kubernetes.io/projected/60edd466-2f48-492e-9c1b-a58f4a8882a4-kube-api-access-jc968\") pod \"default-snmp-webhook-694dc457d5-ppgbd\" (UID: \"60edd466-2f48-492e-9c1b-a58f4a8882a4\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" Feb 20 00:34:04 crc kubenswrapper[5107]: I0220 00:34:04.764033 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.026880 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-ppgbd"] Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.195184 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.245878 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljz8c\" (UniqueName: \"kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c\") pod \"54deb717-c484-4ab7-89df-46458a4846ec\" (UID: \"54deb717-c484-4ab7-89df-46458a4846ec\") " Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.251496 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c" (OuterVolumeSpecName: "kube-api-access-ljz8c") pod "54deb717-c484-4ab7-89df-46458a4846ec" (UID: "54deb717-c484-4ab7-89df-46458a4846ec"). InnerVolumeSpecName "kube-api-access-ljz8c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.346921 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ljz8c\" (UniqueName: \"kubernetes.io/projected/54deb717-c484-4ab7-89df-46458a4846ec-kube-api-access-ljz8c\") on node \"crc\" DevicePath \"\"" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.941178 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.941177 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525794-gf9xm" event={"ID":"54deb717-c484-4ab7-89df-46458a4846ec","Type":"ContainerDied","Data":"d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c"} Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.941583 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a9c3a349f5f9df953e723520cf6cac5e9b6e1d6166f06f47115cd705f8748c" Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.947952 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" event={"ID":"60edd466-2f48-492e-9c1b-a58f4a8882a4","Type":"ContainerStarted","Data":"ba8f76836ca57cd29263256956797877c7fe4d9554568a39577dbe193a0ca79d"} Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.980918 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-7xvgj"] Feb 20 00:34:05 crc kubenswrapper[5107]: I0220 00:34:05.985820 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525788-7xvgj"] Feb 20 00:34:06 crc kubenswrapper[5107]: I0220 00:34:06.501006 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863755a4-5fa7-4005-9d6b-aa969fe9e5a6" path="/var/lib/kubelet/pods/863755a4-5fa7-4005-9d6b-aa969fe9e5a6/volumes" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.032562 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.033554 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="54deb717-c484-4ab7-89df-46458a4846ec" containerName="oc" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.033578 5107 
state_mem.go:107] "Deleted CPUSet assignment" podUID="54deb717-c484-4ab7-89df-46458a4846ec" containerName="oc" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.033742 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="54deb717-c484-4ab7-89df-46458a4846ec" containerName="oc" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.043920 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.046268 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.046572 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.048689 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.048976 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.049135 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-g6zj4\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.049343 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.056199 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192230 5107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-web-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192687 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192767 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192850 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-volume\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192874 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfls\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-kube-api-access-4lfls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc 
kubenswrapper[5107]: I0220 00:34:08.192943 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.192985 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.193011 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.193085 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-out\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294021 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-volume\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " 
pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294074 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4lfls\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-kube-api-access-4lfls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294100 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294131 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294163 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294213 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-out\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " 
pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294246 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-web-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294279 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.294303 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: E0220 00:34:08.295223 5107 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:08 crc kubenswrapper[5107]: E0220 00:34:08.295304 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls podName:4ce5a4a3-1668-4973-a16a-6401a9a4b472 nodeName:}" failed. No retries permitted until 2026-02-20 00:34:08.795285075 +0000 UTC m=+1535.163942641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4ce5a4a3-1668-4973-a16a-6401a9a4b472") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.301619 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.301693 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-web-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.302204 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-out\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.302789 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.304816 5107 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.304845 5107 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8aa74a6aeb1688fb9b57104d4d36481c1b703ac22f891cf5d28c074e7620a81c/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.306693 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.307952 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-config-volume\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.310700 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lfls\" (UniqueName: \"kubernetes.io/projected/4ce5a4a3-1668-4973-a16a-6401a9a4b472-kube-api-access-4lfls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.326488 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a3f03a4-1999-41de-8cd4-f65d9998c55d\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: I0220 00:34:08.814424 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:08 crc kubenswrapper[5107]: E0220 00:34:08.814630 5107 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:08 crc kubenswrapper[5107]: E0220 00:34:08.814727 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls podName:4ce5a4a3-1668-4973-a16a-6401a9a4b472 nodeName:}" failed. No retries permitted until 2026-02-20 00:34:09.814708812 +0000 UTC m=+1536.183366378 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4ce5a4a3-1668-4973-a16a-6401a9a4b472") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:09 crc kubenswrapper[5107]: I0220 00:34:09.834647 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:09 crc kubenswrapper[5107]: E0220 00:34:09.834905 5107 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:09 crc kubenswrapper[5107]: E0220 00:34:09.835011 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls podName:4ce5a4a3-1668-4973-a16a-6401a9a4b472 nodeName:}" failed. No retries permitted until 2026-02-20 00:34:11.834986113 +0000 UTC m=+1538.203643689 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4ce5a4a3-1668-4973-a16a-6401a9a4b472") : secret "default-alertmanager-proxy-tls" not found Feb 20 00:34:10 crc kubenswrapper[5107]: I0220 00:34:10.008540 5107 generic.go:358] "Generic (PLEG): container finished" podID="df14b034-8115-4790-870b-81499599ef18" containerID="6e52b8d99fdbfa3db418bb3bdefd14fc762d985ec9319d1f129114012136e7c8" exitCode=0 Feb 20 00:34:10 crc kubenswrapper[5107]: I0220 00:34:10.008678 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerDied","Data":"6e52b8d99fdbfa3db418bb3bdefd14fc762d985ec9319d1f129114012136e7c8"} Feb 20 00:34:11 crc kubenswrapper[5107]: I0220 00:34:11.876979 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:11 crc kubenswrapper[5107]: I0220 00:34:11.888908 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ce5a4a3-1668-4973-a16a-6401a9a4b472-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4ce5a4a3-1668-4973-a16a-6401a9a4b472\") " pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:12 crc kubenswrapper[5107]: I0220 00:34:12.011615 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 20 00:34:13 crc kubenswrapper[5107]: I0220 00:34:13.771788 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 20 00:34:13 crc kubenswrapper[5107]: W0220 00:34:13.778338 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ce5a4a3_1668_4973_a16a_6401a9a4b472.slice/crio-a2aa09866d8748c07ab689f1f94198b45d9d1bab0b2aa996f7ebf4b9266325fe WatchSource:0}: Error finding container a2aa09866d8748c07ab689f1f94198b45d9d1bab0b2aa996f7ebf4b9266325fe: Status 404 returned error can't find the container with id a2aa09866d8748c07ab689f1f94198b45d9d1bab0b2aa996f7ebf4b9266325fe Feb 20 00:34:14 crc kubenswrapper[5107]: I0220 00:34:14.041717 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" event={"ID":"60edd466-2f48-492e-9c1b-a58f4a8882a4","Type":"ContainerStarted","Data":"95e96710c98048f95a74c2d6376285a6921a7d2ce0865f112a9010fee82b8a19"} Feb 20 00:34:14 crc kubenswrapper[5107]: I0220 00:34:14.043123 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerStarted","Data":"a2aa09866d8748c07ab689f1f94198b45d9d1bab0b2aa996f7ebf4b9266325fe"} Feb 20 00:34:14 crc kubenswrapper[5107]: I0220 00:34:14.057811 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-ppgbd" podStartSLOduration=1.6893552139999999 podStartE2EDuration="10.057796403s" podCreationTimestamp="2026-02-20 00:34:04 +0000 UTC" firstStartedPulling="2026-02-20 00:34:05.03419676 +0000 UTC m=+1531.402854326" lastFinishedPulling="2026-02-20 00:34:13.402637909 +0000 UTC m=+1539.771295515" observedRunningTime="2026-02-20 00:34:14.053521003 +0000 UTC m=+1540.422178569" 
watchObservedRunningTime="2026-02-20 00:34:14.057796403 +0000 UTC m=+1540.426453969" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.066988 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerStarted","Data":"30448540bf748c4e562ce733cbb9b00ec0f4f942629467df2f7fd734bd5d2fd9"} Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.588973 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"] Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.597013 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.618558 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"] Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.664850 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.664905 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.664965 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrq5\" (UniqueName: 
\"kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.766505 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.766553 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.766606 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrq5\" (UniqueName: \"kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.767285 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.767352 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.794459 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrq5\" (UniqueName: \"kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5\") pod \"redhat-operators-tsv8l\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") " pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:16 crc kubenswrapper[5107]: I0220 00:34:16.917220 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:17 crc kubenswrapper[5107]: I0220 00:34:17.960615 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"] Feb 20 00:34:18 crc kubenswrapper[5107]: I0220 00:34:18.086464 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerStarted","Data":"3bb34dc0b65f2686c16e24cb25947ff546a8963192b1649a0c1fa5f0dbc84c4f"} Feb 20 00:34:18 crc kubenswrapper[5107]: I0220 00:34:18.088130 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerStarted","Data":"8f51fc1f888a89d2ebd5c97b061102a85750a3c86224c3414033068a846f9735"} Feb 20 00:34:19 crc kubenswrapper[5107]: I0220 00:34:19.095309 5107 generic.go:358] "Generic (PLEG): container finished" podID="3625c616-e793-4cdd-ba35-67b1401611a5" containerID="2729106c6ca1692e8b089e65457c80c260255bf1023dcac38ed9e8f442dc08e9" exitCode=0 Feb 20 00:34:19 crc kubenswrapper[5107]: I0220 00:34:19.095646 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerDied","Data":"2729106c6ca1692e8b089e65457c80c260255bf1023dcac38ed9e8f442dc08e9"} Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.164869 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2"] Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.402038 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2"] Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.402237 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.404492 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.404987 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-8frqs\"" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.405695 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.406482 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.429614 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33411a6e-af33-4251-9cb8-b6585faf9f3d-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: 
\"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.429889 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rbf\" (UniqueName: \"kubernetes.io/projected/33411a6e-af33-4251-9cb8-b6585faf9f3d-kube-api-access-q6rbf\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.430090 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33411a6e-af33-4251-9cb8-b6585faf9f3d-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.430345 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.430528 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.531751 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rbf\" (UniqueName: \"kubernetes.io/projected/33411a6e-af33-4251-9cb8-b6585faf9f3d-kube-api-access-q6rbf\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.531810 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33411a6e-af33-4251-9cb8-b6585faf9f3d-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.531851 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.531881 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.531948 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33411a6e-af33-4251-9cb8-b6585faf9f3d-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: E0220 00:34:21.532605 5107 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.532830 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/33411a6e-af33-4251-9cb8-b6585faf9f3d-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.532884 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/33411a6e-af33-4251-9cb8-b6585faf9f3d-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: E0220 00:34:21.533101 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls podName:33411a6e-af33-4251-9cb8-b6585faf9f3d nodeName:}" failed. No retries permitted until 2026-02-20 00:34:22.032836299 +0000 UTC m=+1548.401493905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" (UID: "33411a6e-af33-4251-9cb8-b6585faf9f3d") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.539960 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:21 crc kubenswrapper[5107]: I0220 00:34:21.553636 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rbf\" (UniqueName: \"kubernetes.io/projected/33411a6e-af33-4251-9cb8-b6585faf9f3d-kube-api-access-q6rbf\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:22 crc kubenswrapper[5107]: I0220 00:34:22.043729 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:22 crc kubenswrapper[5107]: E0220 00:34:22.043911 5107 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:34:22 crc kubenswrapper[5107]: 
E0220 00:34:22.044484 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls podName:33411a6e-af33-4251-9cb8-b6585faf9f3d nodeName:}" failed. No retries permitted until 2026-02-20 00:34:23.044463765 +0000 UTC m=+1549.413121331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" (UID: "33411a6e-af33-4251-9cb8-b6585faf9f3d") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.060964 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.080458 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/33411a6e-af33-4251-9cb8-b6585faf9f3d-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2\" (UID: \"33411a6e-af33-4251-9cb8-b6585faf9f3d\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.140897 5107 generic.go:358] "Generic (PLEG): container finished" podID="4ce5a4a3-1668-4973-a16a-6401a9a4b472" containerID="30448540bf748c4e562ce733cbb9b00ec0f4f942629467df2f7fd734bd5d2fd9" exitCode=0 Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 
00:34:23.140973 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerDied","Data":"30448540bf748c4e562ce733cbb9b00ec0f4f942629467df2f7fd734bd5d2fd9"} Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.147741 5107 generic.go:358] "Generic (PLEG): container finished" podID="3625c616-e793-4cdd-ba35-67b1401611a5" containerID="305e16d2dcbd8b19bf4fd940c6bc8e31acb9dfa33744043251c0d93d1257d079" exitCode=0 Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.147919 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerDied","Data":"305e16d2dcbd8b19bf4fd940c6bc8e31acb9dfa33744043251c0d93d1257d079"} Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.155364 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerStarted","Data":"420c10b636cee7bbd4c4cebbd0687f6ccf02ef9e7662e18c92f66ffaa10f7188"} Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.225947 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.700183 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2"] Feb 20 00:34:23 crc kubenswrapper[5107]: I0220 00:34:23.934490 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp"] Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.015759 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"4091d333b7691979fbf05baf3763a5326a4a86ad1a469e75800da7a61640d01f"} Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.016194 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerStarted","Data":"20791129120f1a1fddfaf439a2353f4d4abd99bc26ab7d0a3856f53c00b50a12"} Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.016714 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.020670 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.021887 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.029442 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp"] Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.065570 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsv8l" podStartSLOduration=5.994119321 podStartE2EDuration="9.065549332s" podCreationTimestamp="2026-02-20 00:34:16 +0000 UTC" firstStartedPulling="2026-02-20 00:34:19.096314871 +0000 UTC m=+1545.464972437" lastFinishedPulling="2026-02-20 00:34:22.167744882 +0000 UTC m=+1548.536402448" observedRunningTime="2026-02-20 00:34:25.063625308 +0000 UTC m=+1551.432282874" watchObservedRunningTime="2026-02-20 00:34:25.065549332 +0000 UTC m=+1551.434206908" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.095701 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9t4\" (UniqueName: \"kubernetes.io/projected/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-kube-api-access-rb9t4\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.095873 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"session-secret\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.095915 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.096247 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.096327 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.196998 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9t4\" (UniqueName: 
\"kubernetes.io/projected/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-kube-api-access-rb9t4\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.197042 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.197062 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.197117 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.197143 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: 
\"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.198385 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.198594 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.204039 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.204796 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.220854 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9t4\" (UniqueName: \"kubernetes.io/projected/9b0e9a7d-426a-4e9c-aa89-e2705b7002c6-kube-api-access-rb9t4\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp\" (UID: \"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.337934 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" Feb 20 00:34:25 crc kubenswrapper[5107]: I0220 00:34:25.818078 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp"] Feb 20 00:34:26 crc kubenswrapper[5107]: I0220 00:34:26.181880 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"c19bd7659de69432b140f5fb87630138ea087ebd0ceb498a34ae11f88076b8bf"} Feb 20 00:34:26 crc kubenswrapper[5107]: I0220 00:34:26.919496 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:26 crc kubenswrapper[5107]: I0220 00:34:26.919542 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-tsv8l" Feb 20 00:34:27 crc kubenswrapper[5107]: I0220 00:34:27.979552 5107 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tsv8l" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="registry-server" probeResult="failure" output=< Feb 20 00:34:27 crc kubenswrapper[5107]: timeout: failed to connect service ":50051" within 1s Feb 20 00:34:27 crc kubenswrapper[5107]: > Feb 20 00:34:28 crc kubenswrapper[5107]: I0220 
00:34:28.933288 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp"] Feb 20 00:34:28 crc kubenswrapper[5107]: I0220 00:34:28.957548 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp"] Feb 20 00:34:28 crc kubenswrapper[5107]: I0220 00:34:28.957627 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:28 crc kubenswrapper[5107]: I0220 00:34:28.962668 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 20 00:34:28 crc kubenswrapper[5107]: I0220 00:34:28.962712 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.077923 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5s2\" (UniqueName: \"kubernetes.io/projected/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-kube-api-access-hb5s2\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.078165 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc 
kubenswrapper[5107]: I0220 00:34:29.078370 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.078442 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.078736 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.179773 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.179819 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" 
(UniqueName: \"kubernetes.io/configmap/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.179883 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.179922 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5s2\" (UniqueName: \"kubernetes.io/projected/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-kube-api-access-hb5s2\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.179946 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: E0220 00:34:29.180058 5107 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:34:29 crc kubenswrapper[5107]: E0220 00:34:29.180111 5107 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls podName:edda94c7-f2b0-4139-be2f-13ca1e43f3fe nodeName:}" failed. No retries permitted until 2026-02-20 00:34:29.680093886 +0000 UTC m=+1556.048751452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" (UID: "edda94c7-f2b0-4139-be2f-13ca1e43f3fe") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.182803 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.183156 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.208147 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc 
kubenswrapper[5107]: I0220 00:34:29.214911 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5s2\" (UniqueName: \"kubernetes.io/projected/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-kube-api-access-hb5s2\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: I0220 00:34:29.686440 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:29 crc kubenswrapper[5107]: E0220 00:34:29.686621 5107 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:34:29 crc kubenswrapper[5107]: E0220 00:34:29.686704 5107 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls podName:edda94c7-f2b0-4139-be2f-13ca1e43f3fe nodeName:}" failed. No retries permitted until 2026-02-20 00:34:30.68668189 +0000 UTC m=+1557.055339456 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" (UID: "edda94c7-f2b0-4139-be2f-13ca1e43f3fe") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 20 00:34:30 crc kubenswrapper[5107]: I0220 00:34:30.700679 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:30 crc kubenswrapper[5107]: I0220 00:34:30.716912 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/edda94c7-f2b0-4139-be2f-13ca1e43f3fe-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp\" (UID: \"edda94c7-f2b0-4139-be2f-13ca1e43f3fe\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:30 crc kubenswrapper[5107]: I0220 00:34:30.774546 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" Feb 20 00:34:32 crc kubenswrapper[5107]: I0220 00:34:32.826283 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:34:32 crc kubenswrapper[5107]: I0220 00:34:32.826723 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:34:32 crc kubenswrapper[5107]: I0220 00:34:32.826789 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:34:32 crc kubenswrapper[5107]: I0220 00:34:32.827568 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:34:32 crc kubenswrapper[5107]: I0220 00:34:32.827649 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc" gracePeriod=600 Feb 20 00:34:33 crc kubenswrapper[5107]: I0220 00:34:33.269917 5107 generic.go:358] "Generic (PLEG): container 
finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc" exitCode=0 Feb 20 00:34:33 crc kubenswrapper[5107]: I0220 00:34:33.270087 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc"} Feb 20 00:34:33 crc kubenswrapper[5107]: I0220 00:34:33.270128 5107 scope.go:117] "RemoveContainer" containerID="27058816c5b0f1e08873805991c4c60e645930a52858b50fbcf44e8cd21dad6f" Feb 20 00:34:33 crc kubenswrapper[5107]: I0220 00:34:33.650711 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp"] Feb 20 00:34:33 crc kubenswrapper[5107]: W0220 00:34:33.662505 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedda94c7_f2b0_4139_be2f_13ca1e43f3fe.slice/crio-7d10dfa8d0fbde69587a5203090295ca470f2a43d2c7b09657181aa50b644c15 WatchSource:0}: Error finding container 7d10dfa8d0fbde69587a5203090295ca470f2a43d2c7b09657181aa50b644c15: Status 404 returned error can't find the container with id 7d10dfa8d0fbde69587a5203090295ca470f2a43d2c7b09657181aa50b644c15 Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.281622 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"df14b034-8115-4790-870b-81499599ef18","Type":"ContainerStarted","Data":"68c153f6b28e3c1f61532456e358119e4409ebbe3f94b0d072d9cfa7aa27f536"} Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.286035 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" 
event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"7d10dfa8d0fbde69587a5203090295ca470f2a43d2c7b09657181aa50b644c15"} Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.289033 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"1fcfe21db5ddfa8b67bbfcbf65d934aae6f129721a9a9841085d08734dbd947f"} Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.291971 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"d1a59ab8bd498ba3a97cbff9fb04d8916fab990905a8391baba1bd3b53aff149"} Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.299904 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerStarted","Data":"1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"} Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.308909 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.895286792 podStartE2EDuration="41.308895508s" podCreationTimestamp="2026-02-20 00:33:53 +0000 UTC" firstStartedPulling="2026-02-20 00:33:56.973775753 +0000 UTC m=+1523.342433319" lastFinishedPulling="2026-02-20 00:34:33.387384459 +0000 UTC m=+1559.756042035" observedRunningTime="2026-02-20 00:34:34.304632838 +0000 UTC m=+1560.673290394" watchObservedRunningTime="2026-02-20 00:34:34.308895508 +0000 UTC m=+1560.677553074" Feb 20 00:34:34 crc kubenswrapper[5107]: I0220 00:34:34.311544 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerStarted","Data":"7f9ba0110e0c4b40778441b9a82c95a584613e745d15e7e7b1d07e7eaaa91d05"} Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.340614 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"11940db63a687c77fb75d6721254f2feed2be9e634bed57dcbcfa754992b544a"} Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.345098 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"892e23efc4d0e384defbcbecab78ac86513a6a42cb2fc0018400de8cc4ed20fe"} Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.358911 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"d018a6de3617cfe75f1d6c65c1547e753863e910133561a879d3eda3f3a7c283"} Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.358948 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"187f57bb851dbb3f3c9e2e9e5a69714e2dbfa7721c4762c4e2602e3236830913"} Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.500640 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz"] Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.509545 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.511153 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.511391 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.513043 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz"] Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.610744 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94555f83-7ba4-4142-b52e-e8f11bb77c06-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.610788 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pr4k\" (UniqueName: \"kubernetes.io/projected/94555f83-7ba4-4142-b52e-e8f11bb77c06-kube-api-access-9pr4k\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.610845 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94555f83-7ba4-4142-b52e-e8f11bb77c06-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: 
\"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.610901 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94555f83-7ba4-4142-b52e-e8f11bb77c06-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.712646 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94555f83-7ba4-4142-b52e-e8f11bb77c06-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.712685 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pr4k\" (UniqueName: \"kubernetes.io/projected/94555f83-7ba4-4142-b52e-e8f11bb77c06-kube-api-access-9pr4k\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.712744 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94555f83-7ba4-4142-b52e-e8f11bb77c06-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 
00:34:35.713566 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94555f83-7ba4-4142-b52e-e8f11bb77c06-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.713982 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/94555f83-7ba4-4142-b52e-e8f11bb77c06-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.714359 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94555f83-7ba4-4142-b52e-e8f11bb77c06-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.719803 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/94555f83-7ba4-4142-b52e-e8f11bb77c06-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.728860 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pr4k\" (UniqueName: \"kubernetes.io/projected/94555f83-7ba4-4142-b52e-e8f11bb77c06-kube-api-access-9pr4k\") pod 
\"default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz\" (UID: \"94555f83-7ba4-4142-b52e-e8f11bb77c06\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:35 crc kubenswrapper[5107]: I0220 00:34:35.832107 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.230659 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"] Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.254081 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"] Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.254211 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.256462 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.268527 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz"] Feb 20 00:34:36 crc kubenswrapper[5107]: W0220 00:34:36.282193 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94555f83_7ba4_4142_b52e_e8f11bb77c06.slice/crio-0245cd7cd2f433120c11adf18c28bdd827757a22f753d9245e5f02f8ef432035 WatchSource:0}: Error finding container 0245cd7cd2f433120c11adf18c28bdd827757a22f753d9245e5f02f8ef432035: Status 404 returned error can't find the container with id 0245cd7cd2f433120c11adf18c28bdd827757a22f753d9245e5f02f8ef432035 Feb 20 00:34:36 crc 
kubenswrapper[5107]: I0220 00:34:36.329227 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0b694ae3-ffed-421b-be35-02328f0a54af-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.329271 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0b694ae3-ffed-421b-be35-02328f0a54af-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.329372 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b694ae3-ffed-421b-be35-02328f0a54af-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.329570 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8bmp\" (UniqueName: \"kubernetes.io/projected/0b694ae3-ffed-421b-be35-02328f0a54af-kube-api-access-p8bmp\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.372623 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerStarted","Data":"0245cd7cd2f433120c11adf18c28bdd827757a22f753d9245e5f02f8ef432035"} Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.374866 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerStarted","Data":"90fe78c6d308b467a34bc876dac2a3af501b56a2991114097ca069be1fc6993e"} Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.374887 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4ce5a4a3-1668-4973-a16a-6401a9a4b472","Type":"ContainerStarted","Data":"2daa25cca9571072e07c6b2c35ec69e828d3af4338b6484f9bf3e92d16e1cadd"} Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.398084 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.712952354 podStartE2EDuration="30.398069662s" podCreationTimestamp="2026-02-20 00:34:06 +0000 UTC" firstStartedPulling="2026-02-20 00:34:23.141948238 +0000 UTC m=+1549.510605804" lastFinishedPulling="2026-02-20 00:34:35.827065546 +0000 UTC m=+1562.195723112" observedRunningTime="2026-02-20 00:34:36.39658967 +0000 UTC m=+1562.765247226" watchObservedRunningTime="2026-02-20 00:34:36.398069662 +0000 UTC m=+1562.766727228" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.431468 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0b694ae3-ffed-421b-be35-02328f0a54af-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 
00:34:36.431525 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0b694ae3-ffed-421b-be35-02328f0a54af-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.431562 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b694ae3-ffed-421b-be35-02328f0a54af-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.431692 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8bmp\" (UniqueName: \"kubernetes.io/projected/0b694ae3-ffed-421b-be35-02328f0a54af-kube-api-access-p8bmp\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.433498 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b694ae3-ffed-421b-be35-02328f0a54af-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.434798 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0b694ae3-ffed-421b-be35-02328f0a54af-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.451278 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0b694ae3-ffed-421b-be35-02328f0a54af-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.455470 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8bmp\" (UniqueName: \"kubernetes.io/projected/0b694ae3-ffed-421b-be35-02328f0a54af-kube-api-access-p8bmp\") pod \"default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd\" (UID: \"0b694ae3-ffed-421b-be35-02328f0a54af\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.510308 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.596629 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"
Feb 20 00:34:36 crc kubenswrapper[5107]: I0220 00:34:36.963030 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsv8l"
Feb 20 00:34:37 crc kubenswrapper[5107]: I0220 00:34:37.015308 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsv8l"
Feb 20 00:34:37 crc kubenswrapper[5107]: I0220 00:34:37.112733 5107 scope.go:117] "RemoveContainer" containerID="0ae97859dc5da33b25e25cbcf9b98377baca5a6f4450291f5ed9e06c0f34ddc2"
Feb 20 00:34:37 crc kubenswrapper[5107]: I0220 00:34:37.189228 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"]
Feb 20 00:34:37 crc kubenswrapper[5107]: I0220 00:34:37.384191 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerStarted","Data":"78b7ec0486a617e5f9bbdcd062ac97110783db82e2317d5f0fb38f00aa09252d"}
Feb 20 00:34:38 crc kubenswrapper[5107]: I0220 00:34:38.392559 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsv8l" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="registry-server" containerID="cri-o://20791129120f1a1fddfaf439a2353f4d4abd99bc26ab7d0a3856f53c00b50a12" gracePeriod=2
Feb 20 00:34:39 crc kubenswrapper[5107]: I0220 00:34:39.404502 5107 generic.go:358] "Generic (PLEG): container finished" podID="3625c616-e793-4cdd-ba35-67b1401611a5" containerID="20791129120f1a1fddfaf439a2353f4d4abd99bc26ab7d0a3856f53c00b50a12" exitCode=0
Feb 20 00:34:39 crc kubenswrapper[5107]: I0220 00:34:39.404594 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerDied","Data":"20791129120f1a1fddfaf439a2353f4d4abd99bc26ab7d0a3856f53c00b50a12"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.222358 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsv8l"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.287807 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content\") pod \"3625c616-e793-4cdd-ba35-67b1401611a5\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") "
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.287922 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrq5\" (UniqueName: \"kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5\") pod \"3625c616-e793-4cdd-ba35-67b1401611a5\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") "
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.287946 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities\") pod \"3625c616-e793-4cdd-ba35-67b1401611a5\" (UID: \"3625c616-e793-4cdd-ba35-67b1401611a5\") "
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.288794 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities" (OuterVolumeSpecName: "utilities") pod "3625c616-e793-4cdd-ba35-67b1401611a5" (UID: "3625c616-e793-4cdd-ba35-67b1401611a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.301599 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5" (OuterVolumeSpecName: "kube-api-access-kdrq5") pod "3625c616-e793-4cdd-ba35-67b1401611a5" (UID: "3625c616-e793-4cdd-ba35-67b1401611a5"). InnerVolumeSpecName "kube-api-access-kdrq5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.389864 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kdrq5\" (UniqueName: \"kubernetes.io/projected/3625c616-e793-4cdd-ba35-67b1401611a5-kube-api-access-kdrq5\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.389894 5107 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-utilities\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.412635 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"7835300dd477679e1fe99fb5aec299e92feae9d7c1eee49090568ce38b48ecfe"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.414077 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3625c616-e793-4cdd-ba35-67b1401611a5" (UID: "3625c616-e793-4cdd-ba35-67b1401611a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.414247 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerStarted","Data":"5204773a72716d4a3e259d69b47b0a75e7cd8b893e4e1a321d3675f73f766c18"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.416134 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"421adfe527b549ded37b303f688b9cf74d747ef917563d00a3980133667e886d"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.417874 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"c364bc27fc9e6018bb6f8c0b80db2635694085349ae1cfc9d86003e5136cf164"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.420174 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsv8l" event={"ID":"3625c616-e793-4cdd-ba35-67b1401611a5","Type":"ContainerDied","Data":"3bb34dc0b65f2686c16e24cb25947ff546a8963192b1649a0c1fa5f0dbc84c4f"}
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.420214 5107 scope.go:117] "RemoveContainer" containerID="20791129120f1a1fddfaf439a2353f4d4abd99bc26ab7d0a3856f53c00b50a12"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.420322 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsv8l"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.463971 5107 scope.go:117] "RemoveContainer" containerID="305e16d2dcbd8b19bf4fd940c6bc8e31acb9dfa33744043251c0d93d1257d079"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.475745 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" podStartSLOduration=5.9954589689999995 podStartE2EDuration="12.475727037s" podCreationTimestamp="2026-02-20 00:34:28 +0000 UTC" firstStartedPulling="2026-02-20 00:34:33.665011851 +0000 UTC m=+1560.033669417" lastFinishedPulling="2026-02-20 00:34:40.145279919 +0000 UTC m=+1566.513937485" observedRunningTime="2026-02-20 00:34:40.425973278 +0000 UTC m=+1566.794630834" watchObservedRunningTime="2026-02-20 00:34:40.475727037 +0000 UTC m=+1566.844384603"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.476722 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" podStartSLOduration=1.697436526 podStartE2EDuration="5.476717735s" podCreationTimestamp="2026-02-20 00:34:35 +0000 UTC" firstStartedPulling="2026-02-20 00:34:36.286569635 +0000 UTC m=+1562.655227191" lastFinishedPulling="2026-02-20 00:34:40.065850834 +0000 UTC m=+1566.434508400" observedRunningTime="2026-02-20 00:34:40.44027016 +0000 UTC m=+1566.808927726" watchObservedRunningTime="2026-02-20 00:34:40.476717735 +0000 UTC m=+1566.845375301"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.492496 5107 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3625c616-e793-4cdd-ba35-67b1401611a5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:40 crc kubenswrapper[5107]: W0220 00:34:40.494087 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b694ae3_ffed_421b_be35_02328f0a54af.slice/crio-ff26457b144739016e5d2b0466093de3e024085f485cc68340876f05a3ad28ff WatchSource:0}: Error finding container ff26457b144739016e5d2b0466093de3e024085f485cc68340876f05a3ad28ff: Status 404 returned error can't find the container with id ff26457b144739016e5d2b0466093de3e024085f485cc68340876f05a3ad28ff
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.498397 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" podStartSLOduration=3.156741492 podStartE2EDuration="19.498374025s" podCreationTimestamp="2026-02-20 00:34:21 +0000 UTC" firstStartedPulling="2026-02-20 00:34:23.725364944 +0000 UTC m=+1550.094022510" lastFinishedPulling="2026-02-20 00:34:40.066997477 +0000 UTC m=+1566.435655043" observedRunningTime="2026-02-20 00:34:40.458785211 +0000 UTC m=+1566.827442777" watchObservedRunningTime="2026-02-20 00:34:40.498374025 +0000 UTC m=+1566.867031591"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.499184 5107 scope.go:117] "RemoveContainer" containerID="2729106c6ca1692e8b089e65457c80c260255bf1023dcac38ed9e8f442dc08e9"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.518841 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" podStartSLOduration=3.147576024 podStartE2EDuration="17.51882498s" podCreationTimestamp="2026-02-20 00:34:23 +0000 UTC" firstStartedPulling="2026-02-20 00:34:25.829212786 +0000 UTC m=+1552.197870372" lastFinishedPulling="2026-02-20 00:34:40.200461762 +0000 UTC m=+1566.569119328" observedRunningTime="2026-02-20 00:34:40.481951873 +0000 UTC m=+1566.850609439" watchObservedRunningTime="2026-02-20 00:34:40.51882498 +0000 UTC m=+1566.887482546"
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.519595 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd"]
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.536082 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"]
Feb 20 00:34:40 crc kubenswrapper[5107]: I0220 00:34:40.553683 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsv8l"]
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.429945 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerStarted","Data":"ecd60f685b67253dd226d8c3b12f7bf482bbe5c80777f28a4739e1de5048487e"}
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.430003 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerStarted","Data":"ab02f889382685abd58009b1f5afc8d12d64a4eb2ab58ab488bd77a18f08bf8c"}
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.430025 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerStarted","Data":"ff26457b144739016e5d2b0466093de3e024085f485cc68340876f05a3ad28ff"}
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.462618 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" podStartSLOduration=5.120477609 podStartE2EDuration="5.462542294s" podCreationTimestamp="2026-02-20 00:34:36 +0000 UTC" firstStartedPulling="2026-02-20 00:34:40.500529275 +0000 UTC m=+1566.869186841" lastFinishedPulling="2026-02-20 00:34:40.84259396 +0000 UTC m=+1567.211251526" observedRunningTime="2026-02-20 00:34:41.448608492 +0000 UTC m=+1567.817266058" watchObservedRunningTime="2026-02-20 00:34:41.462542294 +0000 UTC m=+1567.831199860"
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.510118 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 20 00:34:41 crc kubenswrapper[5107]: I0220 00:34:41.570340 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 20 00:34:42 crc kubenswrapper[5107]: I0220 00:34:42.496867 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" path="/var/lib/kubelet/pods/3625c616-e793-4cdd-ba35-67b1401611a5/volumes"
Feb 20 00:34:42 crc kubenswrapper[5107]: I0220 00:34:42.508658 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 20 00:34:47 crc kubenswrapper[5107]: I0220 00:34:47.571935 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"]
Feb 20 00:34:47 crc kubenswrapper[5107]: I0220 00:34:47.574094 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" podUID="548c0b78-a052-4a56-84de-08b3d97c2522" containerName="default-interconnect" containerID="cri-o://9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c" gracePeriod=30
Feb 20 00:34:47 crc kubenswrapper[5107]: I0220 00:34:47.991129 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021523 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021593 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021659 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021698 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021773 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021842 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9lnm\" (UniqueName: \"kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.021866 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials\") pod \"548c0b78-a052-4a56-84de-08b3d97c2522\" (UID: \"548c0b78-a052-4a56-84de-08b3d97c2522\") "
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.025913 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.043386 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.043572 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.046182 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r9wfk"]
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049250 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="extract-content"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049275 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="extract-content"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049309 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="548c0b78-a052-4a56-84de-08b3d97c2522" containerName="default-interconnect"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049316 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="548c0b78-a052-4a56-84de-08b3d97c2522" containerName="default-interconnect"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049352 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="registry-server"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049357 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="registry-server"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049370 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="extract-utilities"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049377 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="extract-utilities"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.049597 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3625c616-e793-4cdd-ba35-67b1401611a5" containerName="registry-server"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.056121 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="548c0b78-a052-4a56-84de-08b3d97c2522" containerName="default-interconnect"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.051055 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.061993 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.064726 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r9wfk"]
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.072831 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.078397 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.090047 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm" (OuterVolumeSpecName: "kube-api-access-z9lnm") pod "548c0b78-a052-4a56-84de-08b3d97c2522" (UID: "548c0b78-a052-4a56-84de-08b3d97c2522"). InnerVolumeSpecName "kube-api-access-z9lnm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123129 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-config\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123276 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123316 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123445 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4ww\" (UniqueName: \"kubernetes.io/projected/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-kube-api-access-fv4ww\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123469 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123537 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123564 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-users\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123629 5107 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-config\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123641 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9lnm\" (UniqueName: \"kubernetes.io/projected/548c0b78-a052-4a56-84de-08b3d97c2522-kube-api-access-z9lnm\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123651 5107 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123660 5107 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-sasl-users\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123670 5107 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123679 5107 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.123689 5107 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/548c0b78-a052-4a56-84de-08b3d97c2522-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225095 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225175 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225200 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-users\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225366 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-config\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225459 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225503 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.225660 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4ww\" (UniqueName: \"kubernetes.io/projected/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-kube-api-access-fv4ww\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.226437 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-config\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.229861 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.230660 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.230758 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-sasl-users\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.230875 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.230955 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.243755 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4ww\" (UniqueName: \"kubernetes.io/projected/c6e6d965-40aa-4bdf-9ed1-335e26e5954b-kube-api-access-fv4ww\") pod \"default-interconnect-55bf8d5cb-r9wfk\" (UID: \"c6e6d965-40aa-4bdf-9ed1-335e26e5954b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.405372 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.482484 5107 generic.go:358] "Generic (PLEG): container finished" podID="33411a6e-af33-4251-9cb8-b6585faf9f3d" containerID="11940db63a687c77fb75d6721254f2feed2be9e634bed57dcbcfa754992b544a" exitCode=0
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.482560 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerDied","Data":"11940db63a687c77fb75d6721254f2feed2be9e634bed57dcbcfa754992b544a"}
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.483033 5107 scope.go:117] "RemoveContainer" containerID="11940db63a687c77fb75d6721254f2feed2be9e634bed57dcbcfa754992b544a"
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.491258 5107 generic.go:358] "Generic (PLEG): container finished" podID="548c0b78-a052-4a56-84de-08b3d97c2522" containerID="9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c" exitCode=0
Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.491383 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.497075 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" event={"ID":"548c0b78-a052-4a56-84de-08b3d97c2522","Type":"ContainerDied","Data":"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c"} Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.497136 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-89jwh" event={"ID":"548c0b78-a052-4a56-84de-08b3d97c2522","Type":"ContainerDied","Data":"6fe6fd7d945670d4e58eb605633469ee55162eea1190424b0dc1733b50a6e4d0"} Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.497212 5107 scope.go:117] "RemoveContainer" containerID="9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.510067 5107 generic.go:358] "Generic (PLEG): container finished" podID="9b0e9a7d-426a-4e9c-aa89-e2705b7002c6" containerID="892e23efc4d0e384defbcbecab78ac86513a6a42cb2fc0018400de8cc4ed20fe" exitCode=0 Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.510325 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerDied","Data":"892e23efc4d0e384defbcbecab78ac86513a6a42cb2fc0018400de8cc4ed20fe"} Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.524670 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b694ae3-ffed-421b-be35-02328f0a54af" containerID="ab02f889382685abd58009b1f5afc8d12d64a4eb2ab58ab488bd77a18f08bf8c" exitCode=0 Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.524821 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" 
event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerDied","Data":"ab02f889382685abd58009b1f5afc8d12d64a4eb2ab58ab488bd77a18f08bf8c"} Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.527026 5107 scope.go:117] "RemoveContainer" containerID="892e23efc4d0e384defbcbecab78ac86513a6a42cb2fc0018400de8cc4ed20fe" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.527555 5107 scope.go:117] "RemoveContainer" containerID="ab02f889382685abd58009b1f5afc8d12d64a4eb2ab58ab488bd77a18f08bf8c" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.558971 5107 generic.go:358] "Generic (PLEG): container finished" podID="edda94c7-f2b0-4139-be2f-13ca1e43f3fe" containerID="d018a6de3617cfe75f1d6c65c1547e753863e910133561a879d3eda3f3a7c283" exitCode=0 Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.559164 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerDied","Data":"d018a6de3617cfe75f1d6c65c1547e753863e910133561a879d3eda3f3a7c283"} Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.560108 5107 scope.go:117] "RemoveContainer" containerID="d018a6de3617cfe75f1d6c65c1547e753863e910133561a879d3eda3f3a7c283" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.575351 5107 scope.go:117] "RemoveContainer" containerID="9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c" Feb 20 00:34:48 crc kubenswrapper[5107]: E0220 00:34:48.579102 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c\": container with ID starting with 9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c not found: ID does not exist" containerID="9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.579171 
5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c"} err="failed to get container status \"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c\": rpc error: code = NotFound desc = could not find container \"9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c\": container with ID starting with 9205aaa0c66444f083ac0b0bb8bdf4d3f2e0e5a7fb474f1bd4023064ccfbe13c not found: ID does not exist" Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.640541 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"] Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.654071 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-89jwh"] Feb 20 00:34:48 crc kubenswrapper[5107]: I0220 00:34:48.942791 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-r9wfk"] Feb 20 00:34:48 crc kubenswrapper[5107]: W0220 00:34:48.955721 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e6d965_40aa_4bdf_9ed1_335e26e5954b.slice/crio-d7f74eb0393ba7a1bcefe444793c94ff640a41c15abdf8acd6768848cd7d8d7c WatchSource:0}: Error finding container d7f74eb0393ba7a1bcefe444793c94ff640a41c15abdf8acd6768848cd7d8d7c: Status 404 returned error can't find the container with id d7f74eb0393ba7a1bcefe444793c94ff640a41c15abdf8acd6768848cd7d8d7c Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.567694 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk" event={"ID":"c6e6d965-40aa-4bdf-9ed1-335e26e5954b","Type":"ContainerStarted","Data":"44613ded2e2b59fa64f89a109c2779ceed7abe4a5ba6b4f67b0c3ed381bdc919"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 
00:34:49.568006 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk" event={"ID":"c6e6d965-40aa-4bdf-9ed1-335e26e5954b","Type":"ContainerStarted","Data":"d7f74eb0393ba7a1bcefe444793c94ff640a41c15abdf8acd6768848cd7d8d7c"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.570291 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerStarted","Data":"3e47617c64a9269eb6bc4b4c9ebd568108314001a88d1c58c85859c7e03f34a5"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.574305 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"4aa9044159a80f6c43f98d3e68598379b4084e498085498533949b9ad09fd68e"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.577060 5107 generic.go:358] "Generic (PLEG): container finished" podID="94555f83-7ba4-4142-b52e-e8f11bb77c06" containerID="78b7ec0486a617e5f9bbdcd062ac97110783db82e2317d5f0fb38f00aa09252d" exitCode=0 Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.577132 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerDied","Data":"78b7ec0486a617e5f9bbdcd062ac97110783db82e2317d5f0fb38f00aa09252d"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.577468 5107 scope.go:117] "RemoveContainer" containerID="78b7ec0486a617e5f9bbdcd062ac97110783db82e2317d5f0fb38f00aa09252d" Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.581884 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" 
event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"523465b41ee966e0221aa5f7ee237e008b87fe1d63cbbf2ddee6629967964b74"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.585833 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"5f81afa6107c2a36ef252b158d81566bc980e65f21a94ab2b6f229c3e9b7a651"} Feb 20 00:34:49 crc kubenswrapper[5107]: I0220 00:34:49.592249 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-r9wfk" podStartSLOduration=2.592236895 podStartE2EDuration="2.592236895s" podCreationTimestamp="2026-02-20 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 00:34:49.586104242 +0000 UTC m=+1575.954761808" watchObservedRunningTime="2026-02-20 00:34:49.592236895 +0000 UTC m=+1575.960894461" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.499756 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548c0b78-a052-4a56-84de-08b3d97c2522" path="/var/lib/kubelet/pods/548c0b78-a052-4a56-84de-08b3d97c2522/volumes" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.596235 5107 generic.go:358] "Generic (PLEG): container finished" podID="33411a6e-af33-4251-9cb8-b6585faf9f3d" containerID="523465b41ee966e0221aa5f7ee237e008b87fe1d63cbbf2ddee6629967964b74" exitCode=0 Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.596372 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerDied","Data":"523465b41ee966e0221aa5f7ee237e008b87fe1d63cbbf2ddee6629967964b74"} Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 
00:34:50.596419 5107 scope.go:117] "RemoveContainer" containerID="11940db63a687c77fb75d6721254f2feed2be9e634bed57dcbcfa754992b544a" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.597055 5107 scope.go:117] "RemoveContainer" containerID="523465b41ee966e0221aa5f7ee237e008b87fe1d63cbbf2ddee6629967964b74" Feb 20 00:34:50 crc kubenswrapper[5107]: E0220 00:34:50.597472 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2_service-telemetry(33411a6e-af33-4251-9cb8-b6585faf9f3d)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" podUID="33411a6e-af33-4251-9cb8-b6585faf9f3d" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.601922 5107 generic.go:358] "Generic (PLEG): container finished" podID="9b0e9a7d-426a-4e9c-aa89-e2705b7002c6" containerID="5f81afa6107c2a36ef252b158d81566bc980e65f21a94ab2b6f229c3e9b7a651" exitCode=0 Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.602008 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerDied","Data":"5f81afa6107c2a36ef252b158d81566bc980e65f21a94ab2b6f229c3e9b7a651"} Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.602726 5107 scope.go:117] "RemoveContainer" containerID="5f81afa6107c2a36ef252b158d81566bc980e65f21a94ab2b6f229c3e9b7a651" Feb 20 00:34:50 crc kubenswrapper[5107]: E0220 00:34:50.603081 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp_service-telemetry(9b0e9a7d-426a-4e9c-aa89-e2705b7002c6)\"" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" podUID="9b0e9a7d-426a-4e9c-aa89-e2705b7002c6" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.612410 5107 generic.go:358] "Generic (PLEG): container finished" podID="0b694ae3-ffed-421b-be35-02328f0a54af" containerID="3e47617c64a9269eb6bc4b4c9ebd568108314001a88d1c58c85859c7e03f34a5" exitCode=0 Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.612696 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerDied","Data":"3e47617c64a9269eb6bc4b4c9ebd568108314001a88d1c58c85859c7e03f34a5"} Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.613367 5107 scope.go:117] "RemoveContainer" containerID="3e47617c64a9269eb6bc4b4c9ebd568108314001a88d1c58c85859c7e03f34a5" Feb 20 00:34:50 crc kubenswrapper[5107]: E0220 00:34:50.613691 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd_service-telemetry(0b694ae3-ffed-421b-be35-02328f0a54af)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" podUID="0b694ae3-ffed-421b-be35-02328f0a54af" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.620523 5107 generic.go:358] "Generic (PLEG): container finished" podID="edda94c7-f2b0-4139-be2f-13ca1e43f3fe" containerID="4aa9044159a80f6c43f98d3e68598379b4084e498085498533949b9ad09fd68e" exitCode=0 Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.620680 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerDied","Data":"4aa9044159a80f6c43f98d3e68598379b4084e498085498533949b9ad09fd68e"} Feb 
20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.621388 5107 scope.go:117] "RemoveContainer" containerID="4aa9044159a80f6c43f98d3e68598379b4084e498085498533949b9ad09fd68e" Feb 20 00:34:50 crc kubenswrapper[5107]: E0220 00:34:50.621707 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp_service-telemetry(edda94c7-f2b0-4139-be2f-13ca1e43f3fe)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" podUID="edda94c7-f2b0-4139-be2f-13ca1e43f3fe" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.631308 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerStarted","Data":"ab323aa545729fc4216cfc144da3192a2a8e6c89eb335e09d5e9e2614f8960c0"} Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.648767 5107 scope.go:117] "RemoveContainer" containerID="892e23efc4d0e384defbcbecab78ac86513a6a42cb2fc0018400de8cc4ed20fe" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.713316 5107 scope.go:117] "RemoveContainer" containerID="ab02f889382685abd58009b1f5afc8d12d64a4eb2ab58ab488bd77a18f08bf8c" Feb 20 00:34:50 crc kubenswrapper[5107]: I0220 00:34:50.804665 5107 scope.go:117] "RemoveContainer" containerID="d018a6de3617cfe75f1d6c65c1547e753863e910133561a879d3eda3f3a7c283" Feb 20 00:34:51 crc kubenswrapper[5107]: I0220 00:34:51.646471 5107 generic.go:358] "Generic (PLEG): container finished" podID="94555f83-7ba4-4142-b52e-e8f11bb77c06" containerID="ab323aa545729fc4216cfc144da3192a2a8e6c89eb335e09d5e9e2614f8960c0" exitCode=0 Feb 20 00:34:51 crc kubenswrapper[5107]: I0220 00:34:51.646603 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" 
event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerDied","Data":"ab323aa545729fc4216cfc144da3192a2a8e6c89eb335e09d5e9e2614f8960c0"} Feb 20 00:34:51 crc kubenswrapper[5107]: I0220 00:34:51.646654 5107 scope.go:117] "RemoveContainer" containerID="78b7ec0486a617e5f9bbdcd062ac97110783db82e2317d5f0fb38f00aa09252d" Feb 20 00:34:51 crc kubenswrapper[5107]: I0220 00:34:51.647369 5107 scope.go:117] "RemoveContainer" containerID="ab323aa545729fc4216cfc144da3192a2a8e6c89eb335e09d5e9e2614f8960c0" Feb 20 00:34:51 crc kubenswrapper[5107]: E0220 00:34:51.647801 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz_service-telemetry(94555f83-7ba4-4142-b52e-e8f11bb77c06)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" podUID="94555f83-7ba4-4142-b52e-e8f11bb77c06" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.039305 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.048204 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.053908 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.054734 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.062640 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.111511 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtsd\" (UniqueName: \"kubernetes.io/projected/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-kube-api-access-cwtsd\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.111684 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-qdr-test-config\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.111742 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.212817 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtsd\" (UniqueName: 
\"kubernetes.io/projected/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-kube-api-access-cwtsd\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.212926 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-qdr-test-config\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.212966 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.214440 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-qdr-test-config\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.220883 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.234013 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtsd\" (UniqueName: \"kubernetes.io/projected/dbc34c44-90b7-4482-9d55-9fd821d7f6a6-kube-api-access-cwtsd\") pod \"qdr-test\" (UID: 
\"dbc34c44-90b7-4482-9d55-9fd821d7f6a6\") " pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.363539 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.616010 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 20 00:34:53 crc kubenswrapper[5107]: I0220 00:34:53.692771 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"dbc34c44-90b7-4482-9d55-9fd821d7f6a6","Type":"ContainerStarted","Data":"c43ae46e07755efd96113a335e0cb46c269baee0a6ad77dee744c0e1f6612737"} Feb 20 00:35:01 crc kubenswrapper[5107]: I0220 00:35:01.487155 5107 scope.go:117] "RemoveContainer" containerID="523465b41ee966e0221aa5f7ee237e008b87fe1d63cbbf2ddee6629967964b74" Feb 20 00:35:01 crc kubenswrapper[5107]: I0220 00:35:01.488975 5107 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 00:35:02 crc kubenswrapper[5107]: I0220 00:35:02.797261 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2" event={"ID":"33411a6e-af33-4251-9cb8-b6585faf9f3d","Type":"ContainerStarted","Data":"b15db76c556e99c833aa14f4b71dd98ca40051ade4bce793b0180e20e3663ff1"} Feb 20 00:35:02 crc kubenswrapper[5107]: I0220 00:35:02.800269 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"dbc34c44-90b7-4482-9d55-9fd821d7f6a6","Type":"ContainerStarted","Data":"9b761d7df597a0378cb048a1c7125ffa9aeaa3fe1e8b1e081cbff8c1bd217c6e"} Feb 20 00:35:02 crc kubenswrapper[5107]: I0220 00:35:02.859899 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.802808218 podStartE2EDuration="9.859881594s" podCreationTimestamp="2026-02-20 00:34:53 +0000 UTC" 
firstStartedPulling="2026-02-20 00:34:53.620967984 +0000 UTC m=+1579.989625550" lastFinishedPulling="2026-02-20 00:35:01.67804135 +0000 UTC m=+1588.046698926" observedRunningTime="2026-02-20 00:35:02.856617582 +0000 UTC m=+1589.225275168" watchObservedRunningTime="2026-02-20 00:35:02.859881594 +0000 UTC m=+1589.228539170" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.095822 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-slnss"] Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.103985 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.104316 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-slnss"] Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.106679 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.106727 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.106951 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.106998 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.108471 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.108854 5107 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.260857 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.260895 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.260918 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.260967 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.260995 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.261032 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.261073 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hk5q\" (UniqueName: \"kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.362751 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.362818 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.362874 5107 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.362935 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hk5q\" (UniqueName: \"kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.362972 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.363031 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.363067 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 
00:35:03.364087 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.364817 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.365781 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.367808 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.368002 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.368363 5107 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.388815 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.395699 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.399943 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.416589 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hk5q\" (UniqueName: \"kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q\") pod \"stf-smoketest-smoke1-slnss\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.431412 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.565709 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpw9\" (UniqueName: \"kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9\") pod \"curl\" (UID: \"c2a229e0-2d31-4dbb-991c-8ff5afc139f1\") " pod="service-telemetry/curl" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.667511 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlpw9\" (UniqueName: \"kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9\") pod \"curl\" (UID: \"c2a229e0-2d31-4dbb-991c-8ff5afc139f1\") " pod="service-telemetry/curl" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.685135 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlpw9\" (UniqueName: \"kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9\") pod \"curl\" (UID: \"c2a229e0-2d31-4dbb-991c-8ff5afc139f1\") " pod="service-telemetry/curl" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.801657 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 20 00:35:03 crc kubenswrapper[5107]: I0220 00:35:03.845516 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-slnss"] Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.242300 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 20 00:35:04 crc kubenswrapper[5107]: W0220 00:35:04.246921 5107 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a229e0_2d31_4dbb_991c_8ff5afc139f1.slice/crio-e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7 WatchSource:0}: Error finding container e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7: Status 404 returned error can't find the container with id e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7 Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.493255 5107 scope.go:117] "RemoveContainer" containerID="5f81afa6107c2a36ef252b158d81566bc980e65f21a94ab2b6f229c3e9b7a651" Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.493300 5107 scope.go:117] "RemoveContainer" containerID="4aa9044159a80f6c43f98d3e68598379b4084e498085498533949b9ad09fd68e" Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.827897 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp" event={"ID":"9b0e9a7d-426a-4e9c-aa89-e2705b7002c6","Type":"ContainerStarted","Data":"d814c09129b9dee1c086a76c0b5e8fe61cffcdfbe0decab49966150f64ac568c"} Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.830192 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerStarted","Data":"1545ee6096bec821b29013780daf79df65d00eaa0e78c2faeadc2c2615664a3b"} Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 
00:35:04.843053 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c2a229e0-2d31-4dbb-991c-8ff5afc139f1","Type":"ContainerStarted","Data":"e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7"} Feb 20 00:35:04 crc kubenswrapper[5107]: I0220 00:35:04.847306 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp" event={"ID":"edda94c7-f2b0-4139-be2f-13ca1e43f3fe","Type":"ContainerStarted","Data":"b2c2e479d4d46527d1127a15b2c0c59a8ff2843db4c18f45c256da21affb4f62"} Feb 20 00:35:05 crc kubenswrapper[5107]: I0220 00:35:05.486118 5107 scope.go:117] "RemoveContainer" containerID="3e47617c64a9269eb6bc4b4c9ebd568108314001a88d1c58c85859c7e03f34a5" Feb 20 00:35:05 crc kubenswrapper[5107]: I0220 00:35:05.486544 5107 scope.go:117] "RemoveContainer" containerID="ab323aa545729fc4216cfc144da3192a2a8e6c89eb335e09d5e9e2614f8960c0" Feb 20 00:35:06 crc kubenswrapper[5107]: I0220 00:35:06.865909 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd" event={"ID":"0b694ae3-ffed-421b-be35-02328f0a54af","Type":"ContainerStarted","Data":"cb52ca762bcaad2a05b1ae81bb0a427194e396b2389200c76a5b1f248bbc19e0"} Feb 20 00:35:06 crc kubenswrapper[5107]: I0220 00:35:06.869865 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz" event={"ID":"94555f83-7ba4-4142-b52e-e8f11bb77c06","Type":"ContainerStarted","Data":"3ffc2dd2c41d5a2b7e203e4791c34f9b21ec4c61b7cf9c5eef5356ae1ee4f98e"} Feb 20 00:35:06 crc kubenswrapper[5107]: I0220 00:35:06.872106 5107 generic.go:358] "Generic (PLEG): container finished" podID="c2a229e0-2d31-4dbb-991c-8ff5afc139f1" containerID="e4db931ad109d71b1dd68bf5672ee3d48b310f3774d3208d30f45c3865cc9a2b" exitCode=0 Feb 20 00:35:06 crc kubenswrapper[5107]: I0220 00:35:06.872162 5107 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c2a229e0-2d31-4dbb-991c-8ff5afc139f1","Type":"ContainerDied","Data":"e4db931ad109d71b1dd68bf5672ee3d48b310f3774d3208d30f45c3865cc9a2b"} Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.091006 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.174202 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlpw9\" (UniqueName: \"kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9\") pod \"c2a229e0-2d31-4dbb-991c-8ff5afc139f1\" (UID: \"c2a229e0-2d31-4dbb-991c-8ff5afc139f1\") " Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.185050 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9" (OuterVolumeSpecName: "kube-api-access-mlpw9") pod "c2a229e0-2d31-4dbb-991c-8ff5afc139f1" (UID: "c2a229e0-2d31-4dbb-991c-8ff5afc139f1"). InnerVolumeSpecName "kube-api-access-mlpw9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.268866 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_c2a229e0-2d31-4dbb-991c-8ff5afc139f1/curl/0.log" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.275672 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mlpw9\" (UniqueName: \"kubernetes.io/projected/c2a229e0-2d31-4dbb-991c-8ff5afc139f1-kube-api-access-mlpw9\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.543394 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-ppgbd_60edd466-2f48-492e-9c1b-a58f4a8882a4/prometheus-webhook-snmp/0.log" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.901255 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.901292 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c2a229e0-2d31-4dbb-991c-8ff5afc139f1","Type":"ContainerDied","Data":"e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7"} Feb 20 00:35:10 crc kubenswrapper[5107]: I0220 00:35:10.901338 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80c47baf0b3c9f19a7cfdc21ef1240683aa2d4de5e196e85cfb134fb36461a7" Feb 20 00:35:14 crc kubenswrapper[5107]: I0220 00:35:14.936388 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerStarted","Data":"6706ef859d9a6030b13030e317e21845d303e2f8bb5d400daec185429cad0d38"} Feb 20 00:35:20 crc kubenswrapper[5107]: I0220 00:35:20.996754 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" 
event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerStarted","Data":"74b271501e30cad0ead5f3622a23ddb2247921bba1a6b3d08f1fd83ed4a08017"} Feb 20 00:35:21 crc kubenswrapper[5107]: I0220 00:35:21.018372 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-slnss" podStartSLOduration=1.669011298 podStartE2EDuration="18.018349569s" podCreationTimestamp="2026-02-20 00:35:03 +0000 UTC" firstStartedPulling="2026-02-20 00:35:03.859542702 +0000 UTC m=+1590.228200268" lastFinishedPulling="2026-02-20 00:35:20.208880963 +0000 UTC m=+1606.577538539" observedRunningTime="2026-02-20 00:35:21.016174578 +0000 UTC m=+1607.384832184" watchObservedRunningTime="2026-02-20 00:35:21.018349569 +0000 UTC m=+1607.387007175" Feb 20 00:35:40 crc kubenswrapper[5107]: I0220 00:35:40.749743 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-ppgbd_60edd466-2f48-492e-9c1b-a58f4a8882a4/prometheus-webhook-snmp/0.log" Feb 20 00:35:49 crc kubenswrapper[5107]: I0220 00:35:49.301522 5107 generic.go:358] "Generic (PLEG): container finished" podID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerID="6706ef859d9a6030b13030e317e21845d303e2f8bb5d400daec185429cad0d38" exitCode=0 Feb 20 00:35:49 crc kubenswrapper[5107]: I0220 00:35:49.301625 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerDied","Data":"6706ef859d9a6030b13030e317e21845d303e2f8bb5d400daec185429cad0d38"} Feb 20 00:35:49 crc kubenswrapper[5107]: I0220 00:35:49.302752 5107 scope.go:117] "RemoveContainer" containerID="6706ef859d9a6030b13030e317e21845d303e2f8bb5d400daec185429cad0d38" Feb 20 00:35:52 crc kubenswrapper[5107]: I0220 00:35:52.335317 5107 generic.go:358] "Generic (PLEG): container finished" podID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" 
containerID="74b271501e30cad0ead5f3622a23ddb2247921bba1a6b3d08f1fd83ed4a08017" exitCode=0 Feb 20 00:35:52 crc kubenswrapper[5107]: I0220 00:35:52.335429 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerDied","Data":"74b271501e30cad0ead5f3622a23ddb2247921bba1a6b3d08f1fd83ed4a08017"} Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.696939 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.795609 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.795820 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.796122 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.797472 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script\") pod 
\"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.797551 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.797618 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hk5q\" (UniqueName: \"kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.797745 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log\") pod \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\" (UID: \"34b3a2ba-2127-47ee-bf0a-6c7529b6143a\") " Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.803522 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q" (OuterVolumeSpecName: "kube-api-access-9hk5q") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "kube-api-access-9hk5q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.820298 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.822266 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.823464 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.830251 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.831473 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.836741 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "34b3a2ba-2127-47ee-bf0a-6c7529b6143a" (UID: "34b3a2ba-2127-47ee-bf0a-6c7529b6143a"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900695 5107 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900740 5107 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900757 5107 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900774 5107 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900790 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hk5q\" (UniqueName: \"kubernetes.io/projected/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-kube-api-access-9hk5q\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900805 5107 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:53 crc kubenswrapper[5107]: I0220 00:35:53.900819 5107 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/34b3a2ba-2127-47ee-bf0a-6c7529b6143a-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 20 00:35:54 crc kubenswrapper[5107]: I0220 00:35:54.367607 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-slnss" event={"ID":"34b3a2ba-2127-47ee-bf0a-6c7529b6143a","Type":"ContainerDied","Data":"1545ee6096bec821b29013780daf79df65d00eaa0e78c2faeadc2c2615664a3b"} Feb 20 00:35:54 crc kubenswrapper[5107]: I0220 00:35:54.367664 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1545ee6096bec821b29013780daf79df65d00eaa0e78c2faeadc2c2615664a3b" Feb 20 00:35:54 crc kubenswrapper[5107]: I0220 00:35:54.367758 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-slnss" Feb 20 00:35:55 crc kubenswrapper[5107]: I0220 00:35:55.861594 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-slnss_34b3a2ba-2127-47ee-bf0a-6c7529b6143a/smoketest-collectd/0.log" Feb 20 00:35:56 crc kubenswrapper[5107]: I0220 00:35:56.159691 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-slnss_34b3a2ba-2127-47ee-bf0a-6c7529b6143a/smoketest-ceilometer/0.log" Feb 20 00:35:56 crc kubenswrapper[5107]: I0220 00:35:56.468543 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-r9wfk_c6e6d965-40aa-4bdf-9ed1-335e26e5954b/default-interconnect/0.log" Feb 20 00:35:56 crc kubenswrapper[5107]: I0220 00:35:56.785633 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2_33411a6e-af33-4251-9cb8-b6585faf9f3d/bridge/2.log" Feb 20 00:35:57 crc kubenswrapper[5107]: I0220 00:35:57.085208 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-78nf2_33411a6e-af33-4251-9cb8-b6585faf9f3d/sg-core/0.log" Feb 20 00:35:57 crc kubenswrapper[5107]: I0220 00:35:57.404074 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz_94555f83-7ba4-4142-b52e-e8f11bb77c06/bridge/2.log" Feb 20 00:35:57 crc kubenswrapper[5107]: I0220 00:35:57.665470 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-c5d9bd866-sq7zz_94555f83-7ba4-4142-b52e-e8f11bb77c06/sg-core/0.log" Feb 20 00:35:57 crc kubenswrapper[5107]: I0220 00:35:57.908907 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp_9b0e9a7d-426a-4e9c-aa89-e2705b7002c6/bridge/2.log" Feb 20 00:35:58 crc kubenswrapper[5107]: I0220 00:35:58.175347 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-xh7kp_9b0e9a7d-426a-4e9c-aa89-e2705b7002c6/sg-core/0.log" Feb 20 00:35:58 crc kubenswrapper[5107]: I0220 00:35:58.463871 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd_0b694ae3-ffed-421b-be35-02328f0a54af/bridge/2.log" Feb 20 00:35:58 crc kubenswrapper[5107]: I0220 00:35:58.805604 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-85dbfcd77b-mzgqd_0b694ae3-ffed-421b-be35-02328f0a54af/sg-core/0.log" Feb 20 00:35:59 crc kubenswrapper[5107]: I0220 00:35:59.138704 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp_edda94c7-f2b0-4139-be2f-13ca1e43f3fe/bridge/2.log" Feb 20 00:35:59 crc kubenswrapper[5107]: I0220 00:35:59.403787 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-kqccp_edda94c7-f2b0-4139-be2f-13ca1e43f3fe/sg-core/0.log" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.139261 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525796-hpxq9"] Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140881 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-ceilometer" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140919 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-ceilometer" 
Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140939 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-collectd" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140952 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-collectd" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140980 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2a229e0-2d31-4dbb-991c-8ff5afc139f1" containerName="curl" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.140994 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a229e0-2d31-4dbb-991c-8ff5afc139f1" containerName="curl" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.141275 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2a229e0-2d31-4dbb-991c-8ff5afc139f1" containerName="curl" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.141310 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-collectd" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.141340 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="34b3a2ba-2127-47ee-bf0a-6c7529b6143a" containerName="smoketest-ceilometer" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.150544 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.152827 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.153023 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.153295 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.168217 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525796-hpxq9"] Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.308591 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjmq\" (UniqueName: \"kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq\") pod \"auto-csr-approver-29525796-hpxq9\" (UID: \"95e82cc1-19f1-4456-a858-b83cb44ea055\") " pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.410131 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgjmq\" (UniqueName: \"kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq\") pod \"auto-csr-approver-29525796-hpxq9\" (UID: \"95e82cc1-19f1-4456-a858-b83cb44ea055\") " pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.436064 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgjmq\" (UniqueName: \"kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq\") pod \"auto-csr-approver-29525796-hpxq9\" (UID: 
\"95e82cc1-19f1-4456-a858-b83cb44ea055\") " pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.472621 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:00 crc kubenswrapper[5107]: I0220 00:36:00.933711 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525796-hpxq9"] Feb 20 00:36:01 crc kubenswrapper[5107]: I0220 00:36:01.427120 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" event={"ID":"95e82cc1-19f1-4456-a858-b83cb44ea055","Type":"ContainerStarted","Data":"e892e845410aec8a2183cfff2413a87e1873293c9ef7e266ff07f91505732df1"} Feb 20 00:36:02 crc kubenswrapper[5107]: I0220 00:36:02.435711 5107 generic.go:358] "Generic (PLEG): container finished" podID="95e82cc1-19f1-4456-a858-b83cb44ea055" containerID="26323dde90fee94210fd8dcf25c36defa60918d34e4b67b10acbab626e4cb24d" exitCode=0 Feb 20 00:36:02 crc kubenswrapper[5107]: I0220 00:36:02.435778 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" event={"ID":"95e82cc1-19f1-4456-a858-b83cb44ea055","Type":"ContainerDied","Data":"26323dde90fee94210fd8dcf25c36defa60918d34e4b67b10acbab626e4cb24d"} Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.142860 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6fcbfcd9fd-fhjxc_bc8c9fb1-58d6-422d-ae81-e2a9cf72c811/operator/0.log" Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.505053 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_df14b034-8115-4790-870b-81499599ef18/prometheus/0.log" Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.760785 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.853967 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_3843b818-6c4f-4935-a475-6fac500764f9/elasticsearch/0.log" Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.863571 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgjmq\" (UniqueName: \"kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq\") pod \"95e82cc1-19f1-4456-a858-b83cb44ea055\" (UID: \"95e82cc1-19f1-4456-a858-b83cb44ea055\") " Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.870420 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq" (OuterVolumeSpecName: "kube-api-access-tgjmq") pod "95e82cc1-19f1-4456-a858-b83cb44ea055" (UID: "95e82cc1-19f1-4456-a858-b83cb44ea055"). InnerVolumeSpecName "kube-api-access-tgjmq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:36:03 crc kubenswrapper[5107]: I0220 00:36:03.966119 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgjmq\" (UniqueName: \"kubernetes.io/projected/95e82cc1-19f1-4456-a858-b83cb44ea055-kube-api-access-tgjmq\") on node \"crc\" DevicePath \"\"" Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.139674 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-ppgbd_60edd466-2f48-492e-9c1b-a58f4a8882a4/prometheus-webhook-snmp/0.log" Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.453043 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_4ce5a4a3-1668-4973-a16a-6401a9a4b472/alertmanager/0.log" Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.459852 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" event={"ID":"95e82cc1-19f1-4456-a858-b83cb44ea055","Type":"ContainerDied","Data":"e892e845410aec8a2183cfff2413a87e1873293c9ef7e266ff07f91505732df1"} Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.459941 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e892e845410aec8a2183cfff2413a87e1873293c9ef7e266ff07f91505732df1" Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.460065 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525796-hpxq9" Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.842963 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-7j74f"] Feb 20 00:36:04 crc kubenswrapper[5107]: I0220 00:36:04.852594 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525790-7j74f"] Feb 20 00:36:06 crc kubenswrapper[5107]: I0220 00:36:06.494834 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840d4a82-b92c-43b9-9731-84a5e3d55b65" path="/var/lib/kubelet/pods/840d4a82-b92c-43b9-9731-84a5e3d55b65/volumes" Feb 20 00:36:20 crc kubenswrapper[5107]: I0220 00:36:20.972680 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5cd758d596-cmlk9_c6fa5fd7-c794-4025-95a1-b237f8ee60c1/operator/0.log" Feb 20 00:36:24 crc kubenswrapper[5107]: I0220 00:36:24.679100 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6fcbfcd9fd-fhjxc_bc8c9fb1-58d6-422d-ae81-e2a9cf72c811/operator/0.log" Feb 20 00:36:25 crc kubenswrapper[5107]: I0220 00:36:25.081012 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_dbc34c44-90b7-4482-9d55-9fd821d7f6a6/qdr/0.log" Feb 20 00:36:40 crc kubenswrapper[5107]: I0220 00:36:40.071785 5107 scope.go:117] "RemoveContainer" containerID="76dc8190e7ddff617efee41ab0e8c056ab7bfda4b3e402d816280e83ba4c06bc" Feb 20 00:36:40 crc kubenswrapper[5107]: I0220 00:36:40.176238 5107 scope.go:117] "RemoveContainer" containerID="553e606cc12b71623bd51c358c9f6f13b9d144041240ac5ee1eb28e832e867c0" Feb 20 00:36:40 crc kubenswrapper[5107]: I0220 00:36:40.282695 5107 scope.go:117] "RemoveContainer" containerID="25c7ee7cab5f21b8f45e3cde3fed2ff94669b441bad9da77494179ac94fde768" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.009377 5107 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-b5g2w/must-gather-vf5wn"] Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.010782 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95e82cc1-19f1-4456-a858-b83cb44ea055" containerName="oc" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.010801 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e82cc1-19f1-4456-a858-b83cb44ea055" containerName="oc" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.010987 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="95e82cc1-19f1-4456-a858-b83cb44ea055" containerName="oc" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.015474 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.018827 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-b5g2w\"/\"openshift-service-ca.crt\"" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.019048 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-b5g2w\"/\"kube-root-ca.crt\"" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.021123 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b5g2w/must-gather-vf5wn"] Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.082059 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.082161 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q67xf\" (UniqueName: \"kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.183775 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.183897 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q67xf\" (UniqueName: \"kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.184313 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.230043 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q67xf\" (UniqueName: \"kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf\") pod \"must-gather-vf5wn\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.346944 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.553372 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b5g2w/must-gather-vf5wn"] Feb 20 00:36:49 crc kubenswrapper[5107]: I0220 00:36:49.898643 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" event={"ID":"3860e011-8a97-4178-a087-90bd63b6694e","Type":"ContainerStarted","Data":"527303dfc9da60f19a5d4adf1bf4981c0c44aafe62f56e94c888bdfa51e7ac8c"} Feb 20 00:36:55 crc kubenswrapper[5107]: I0220 00:36:55.955545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" event={"ID":"3860e011-8a97-4178-a087-90bd63b6694e","Type":"ContainerStarted","Data":"1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72"} Feb 20 00:36:55 crc kubenswrapper[5107]: I0220 00:36:55.956105 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" event={"ID":"3860e011-8a97-4178-a087-90bd63b6694e","Type":"ContainerStarted","Data":"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed"} Feb 20 00:36:55 crc kubenswrapper[5107]: I0220 00:36:55.976170 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" podStartSLOduration=2.280009907 podStartE2EDuration="7.976133342s" podCreationTimestamp="2026-02-20 00:36:48 +0000 UTC" firstStartedPulling="2026-02-20 00:36:49.559913406 +0000 UTC m=+1695.928570992" lastFinishedPulling="2026-02-20 00:36:55.256036861 +0000 UTC m=+1701.624694427" observedRunningTime="2026-02-20 00:36:55.972238083 +0000 UTC m=+1702.340895669" watchObservedRunningTime="2026-02-20 00:36:55.976133342 +0000 UTC m=+1702.344790918" Feb 20 00:37:02 crc kubenswrapper[5107]: I0220 00:37:02.824680 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:37:02 crc kubenswrapper[5107]: I0220 00:37:02.825486 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.076541 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.086844 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.092062 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.249652 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t656h\" (UniqueName: \"kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h\") pod \"infrawatch-operators-hnxlz\" (UID: \"ff946b44-e2d0-4da4-9441-22438b533692\") " pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.351652 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t656h\" (UniqueName: \"kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h\") pod \"infrawatch-operators-hnxlz\" (UID: \"ff946b44-e2d0-4da4-9441-22438b533692\") " pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:24 crc 
kubenswrapper[5107]: I0220 00:37:24.380762 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t656h\" (UniqueName: \"kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h\") pod \"infrawatch-operators-hnxlz\" (UID: \"ff946b44-e2d0-4da4-9441-22438b533692\") " pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.452051 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:24 crc kubenswrapper[5107]: I0220 00:37:24.975355 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:25 crc kubenswrapper[5107]: I0220 00:37:25.270950 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hnxlz" event={"ID":"ff946b44-e2d0-4da4-9441-22438b533692","Type":"ContainerStarted","Data":"e17c7874f3b5843e445f8f9e37057bfe7011b64dacf0a0e2b74f8d2d3c0cd2a9"} Feb 20 00:37:26 crc kubenswrapper[5107]: I0220 00:37:26.284545 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hnxlz" event={"ID":"ff946b44-e2d0-4da4-9441-22438b533692","Type":"ContainerStarted","Data":"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201"} Feb 20 00:37:26 crc kubenswrapper[5107]: I0220 00:37:26.316418 5107 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-hnxlz" podStartSLOduration=2.186360275 podStartE2EDuration="2.316390204s" podCreationTimestamp="2026-02-20 00:37:24 +0000 UTC" firstStartedPulling="2026-02-20 00:37:24.9910136 +0000 UTC m=+1731.359671206" lastFinishedPulling="2026-02-20 00:37:25.121043559 +0000 UTC m=+1731.489701135" observedRunningTime="2026-02-20 00:37:26.305888478 +0000 UTC m=+1732.674546074" watchObservedRunningTime="2026-02-20 00:37:26.316390204 +0000 UTC 
m=+1732.685047810" Feb 20 00:37:32 crc kubenswrapper[5107]: I0220 00:37:32.824300 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:37:32 crc kubenswrapper[5107]: I0220 00:37:32.825265 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:37:34 crc kubenswrapper[5107]: I0220 00:37:34.452501 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:34 crc kubenswrapper[5107]: I0220 00:37:34.452788 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:34 crc kubenswrapper[5107]: I0220 00:37:34.505445 5107 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:35 crc kubenswrapper[5107]: I0220 00:37:35.394840 5107 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:35 crc kubenswrapper[5107]: I0220 00:37:35.442528 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:37 crc kubenswrapper[5107]: I0220 00:37:37.382296 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-hnxlz" podUID="ff946b44-e2d0-4da4-9441-22438b533692" containerName="registry-server" 
containerID="cri-o://c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201" gracePeriod=2 Feb 20 00:37:37 crc kubenswrapper[5107]: I0220 00:37:37.840211 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:37 crc kubenswrapper[5107]: I0220 00:37:37.960700 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t656h\" (UniqueName: \"kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h\") pod \"ff946b44-e2d0-4da4-9441-22438b533692\" (UID: \"ff946b44-e2d0-4da4-9441-22438b533692\") " Feb 20 00:37:37 crc kubenswrapper[5107]: I0220 00:37:37.967122 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h" (OuterVolumeSpecName: "kube-api-access-t656h") pod "ff946b44-e2d0-4da4-9441-22438b533692" (UID: "ff946b44-e2d0-4da4-9441-22438b533692"). InnerVolumeSpecName "kube-api-access-t656h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.063178 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t656h\" (UniqueName: \"kubernetes.io/projected/ff946b44-e2d0-4da4-9441-22438b533692-kube-api-access-t656h\") on node \"crc\" DevicePath \"\"" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.389183 5107 generic.go:358] "Generic (PLEG): container finished" podID="ff946b44-e2d0-4da4-9441-22438b533692" containerID="c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201" exitCode=0 Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.389306 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-hnxlz" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.389326 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hnxlz" event={"ID":"ff946b44-e2d0-4da4-9441-22438b533692","Type":"ContainerDied","Data":"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201"} Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.389389 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-hnxlz" event={"ID":"ff946b44-e2d0-4da4-9441-22438b533692","Type":"ContainerDied","Data":"e17c7874f3b5843e445f8f9e37057bfe7011b64dacf0a0e2b74f8d2d3c0cd2a9"} Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.389413 5107 scope.go:117] "RemoveContainer" containerID="c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.408837 5107 scope.go:117] "RemoveContainer" containerID="c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201" Feb 20 00:37:38 crc kubenswrapper[5107]: E0220 00:37:38.409583 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201\": container with ID starting with c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201 not found: ID does not exist" containerID="c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.409639 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201"} err="failed to get container status \"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201\": rpc error: code = NotFound desc = could not find container 
\"c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201\": container with ID starting with c96df5895782e8d69dafac5bfbda4f20d762735fc84c30de2bdd602f0ba5f201 not found: ID does not exist" Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.425627 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.431158 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-hnxlz"] Feb 20 00:37:38 crc kubenswrapper[5107]: I0220 00:37:38.502111 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff946b44-e2d0-4da4-9441-22438b533692" path="/var/lib/kubelet/pods/ff946b44-e2d0-4da4-9441-22438b533692/volumes" Feb 20 00:37:40 crc kubenswrapper[5107]: I0220 00:37:40.435604 5107 scope.go:117] "RemoveContainer" containerID="e73eec1422aae264f736e239da0725927404f962c13e11b6b073eb36d8d21aca" Feb 20 00:37:40 crc kubenswrapper[5107]: I0220 00:37:40.531432 5107 scope.go:117] "RemoveContainer" containerID="61f38785f3aeb506c7202f95f268d5313ee73eae05fb53f9c6f8bd41162da3c5" Feb 20 00:37:46 crc kubenswrapper[5107]: I0220 00:37:46.040297 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-tcpmn_6623692b-4959-4045-8da6-f64819b323e9/control-plane-machine-set-operator/0.log" Feb 20 00:37:46 crc kubenswrapper[5107]: I0220 00:37:46.150214 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-nksz9_38dcb891-7354-413d-ba1d-016f0522c1bb/machine-api-operator/0.log" Feb 20 00:37:46 crc kubenswrapper[5107]: I0220 00:37:46.154461 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-nksz9_38dcb891-7354-413d-ba1d-016f0522c1bb/kube-rbac-proxy/0.log" Feb 20 00:37:59 crc kubenswrapper[5107]: I0220 00:37:59.653879 5107 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-lmsmj_b0faf488-8462-43cb-8579-20a0407bdfd9/cert-manager-controller/0.log" Feb 20 00:37:59 crc kubenswrapper[5107]: I0220 00:37:59.806781 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-zxrwj_281d60a7-353e-4833-a849-d020232dc2c8/cert-manager-cainjector/0.log" Feb 20 00:37:59 crc kubenswrapper[5107]: I0220 00:37:59.839803 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-pnlqb_e2235000-3217-4c46-8d84-a0c0e737d469/cert-manager-webhook/0.log" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.154136 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525798-h4g9c"] Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.155683 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff946b44-e2d0-4da4-9441-22438b533692" containerName="registry-server" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.155704 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff946b44-e2d0-4da4-9441-22438b533692" containerName="registry-server" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.155826 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff946b44-e2d0-4da4-9441-22438b533692" containerName="registry-server" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.164479 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525798-h4g9c"] Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.164588 5107 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.167706 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.167925 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\"" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.168074 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.208042 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kq8\" (UniqueName: \"kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8\") pod \"auto-csr-approver-29525798-h4g9c\" (UID: \"a9fa2bba-cc92-43d3-a4c1-5691a1927e91\") " pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.309090 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kq8\" (UniqueName: \"kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8\") pod \"auto-csr-approver-29525798-h4g9c\" (UID: \"a9fa2bba-cc92-43d3-a4c1-5691a1927e91\") " pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.336521 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kq8\" (UniqueName: \"kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8\") pod \"auto-csr-approver-29525798-h4g9c\" (UID: \"a9fa2bba-cc92-43d3-a4c1-5691a1927e91\") " pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.499847 5107 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:00 crc kubenswrapper[5107]: I0220 00:38:00.953100 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525798-h4g9c"] Feb 20 00:38:01 crc kubenswrapper[5107]: I0220 00:38:01.592183 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" event={"ID":"a9fa2bba-cc92-43d3-a4c1-5691a1927e91","Type":"ContainerStarted","Data":"367fe8b905406c118607f324ae67a4ca3c980646070475295b5f7bd89c5b69bc"} Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.603875 5107 generic.go:358] "Generic (PLEG): container finished" podID="a9fa2bba-cc92-43d3-a4c1-5691a1927e91" containerID="e0a08478309114c93af7127903fc0a798d8f1b5a515c16980b5685528cc64658" exitCode=0 Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.603932 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" event={"ID":"a9fa2bba-cc92-43d3-a4c1-5691a1927e91","Type":"ContainerDied","Data":"e0a08478309114c93af7127903fc0a798d8f1b5a515c16980b5685528cc64658"} Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.824750 5107 patch_prober.go:28] interesting pod/machine-config-daemon-5bqkx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.824837 5107 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 
00:38:02.824972 5107 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.825854 5107 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"} pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 20 00:38:02 crc kubenswrapper[5107]: I0220 00:38:02.825963 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerName="machine-config-daemon" containerID="cri-o://1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" gracePeriod=600 Feb 20 00:38:02 crc kubenswrapper[5107]: E0220 00:38:02.948381 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.613785 5107 generic.go:358] "Generic (PLEG): container finished" podID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" exitCode=0 Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.613893 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" 
event={"ID":"2a8cc693-438e-4d3b-8865-7d3907f9dc78","Type":"ContainerDied","Data":"1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"} Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.614388 5107 scope.go:117] "RemoveContainer" containerID="caab76c5ecaf8c514147de4913329952f9a899f93a1beb03729a59c67867fcdc" Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.615047 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:38:03 crc kubenswrapper[5107]: E0220 00:38:03.615733 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.876593 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.962177 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2kq8\" (UniqueName: \"kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8\") pod \"a9fa2bba-cc92-43d3-a4c1-5691a1927e91\" (UID: \"a9fa2bba-cc92-43d3-a4c1-5691a1927e91\") " Feb 20 00:38:03 crc kubenswrapper[5107]: I0220 00:38:03.970351 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8" (OuterVolumeSpecName: "kube-api-access-f2kq8") pod "a9fa2bba-cc92-43d3-a4c1-5691a1927e91" (UID: "a9fa2bba-cc92-43d3-a4c1-5691a1927e91"). InnerVolumeSpecName "kube-api-access-f2kq8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.064440 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2kq8\" (UniqueName: \"kubernetes.io/projected/a9fa2bba-cc92-43d3-a4c1-5691a1927e91-kube-api-access-f2kq8\") on node \"crc\" DevicePath \"\"" Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.627389 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" event={"ID":"a9fa2bba-cc92-43d3-a4c1-5691a1927e91","Type":"ContainerDied","Data":"367fe8b905406c118607f324ae67a4ca3c980646070475295b5f7bd89c5b69bc"} Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.627429 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367fe8b905406c118607f324ae67a4ca3c980646070475295b5f7bd89c5b69bc" Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.627480 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525798-h4g9c" Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.934919 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-bw2g5"] Feb 20 00:38:04 crc kubenswrapper[5107]: I0220 00:38:04.940264 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525792-bw2g5"] Feb 20 00:38:06 crc kubenswrapper[5107]: I0220 00:38:06.501606 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd49c06d-625b-4221-a044-7ffb4bb94e6c" path="/var/lib/kubelet/pods/cd49c06d-625b-4221-a044-7ffb4bb94e6c/volumes" Feb 20 00:38:15 crc kubenswrapper[5107]: I0220 00:38:15.826776 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-dqzb2_d8430d39-cf33-43a4-922b-46a1456aecfc/prometheus-operator/0.log" Feb 20 00:38:16 crc kubenswrapper[5107]: I0220 00:38:16.003855 5107 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg_0a556d1d-e76d-4c41-aa39-cd3da09d0fc4/prometheus-operator-admission-webhook/0.log" Feb 20 00:38:16 crc kubenswrapper[5107]: I0220 00:38:16.018521 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm_868305ca-ba59-4b0f-9887-cc5967dd4e1e/prometheus-operator-admission-webhook/0.log" Feb 20 00:38:16 crc kubenswrapper[5107]: I0220 00:38:16.176791 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-22brv_aedefad2-67f0-4b49-bb87-80b19cf0faf5/operator/0.log" Feb 20 00:38:16 crc kubenswrapper[5107]: I0220 00:38:16.189943 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-2dsh6_5679dc1f-7e3e-4370-998c-1cbb4b0fad69/perses-operator/0.log" Feb 20 00:38:17 crc kubenswrapper[5107]: I0220 00:38:17.486091 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:38:17 crc kubenswrapper[5107]: E0220 00:38:17.486455 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.241116 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/util/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.499601 5107 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/util/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.500295 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/pull/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.525834 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/pull/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.629733 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/util/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.667558 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/pull/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.669245 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f12mtwf_f448c99d-c280-4172-8725-2e9de584ef00/extract/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.811593 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/util/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.970309 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/pull/0.log" Feb 20 00:38:31 crc kubenswrapper[5107]: I0220 00:38:31.972294 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/util/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.001283 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/pull/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.148976 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/pull/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.185541 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/util/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.204098 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fndqhp_25f9a9ab-c012-4d7e-a60b-7c3f5a13e82c/extract/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.331818 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/util/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.486170 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:38:32 crc kubenswrapper[5107]: E0220 00:38:32.486472 5107 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.525874 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/util/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.547825 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/pull/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.588637 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/pull/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.706775 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/util/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.770523 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/pull/0.log" Feb 20 00:38:32 crc kubenswrapper[5107]: I0220 00:38:32.793984 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hjbc7_918d6f5a-f717-46f4-b49b-b057f68828da/extract/0.log" Feb 20 00:38:32 crc 
kubenswrapper[5107]: I0220 00:38:32.912337 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/util/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.061598 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/pull/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.071028 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/pull/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.072054 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/util/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.223451 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/pull/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.244636 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/util/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.268748 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xn75s_e8434afa-61e1-42ab-9856-5c4ca12d855e/extract/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.394972 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-utilities/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.544028 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-content/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.549837 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-utilities/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.554909 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-content/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.706661 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-content/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.713019 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/extract-utilities/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.874446 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8d86r_c874fdaa-8bc3-4b35-b322-420aca76db11/registry-server/0.log" Feb 20 00:38:33 crc kubenswrapper[5107]: I0220 00:38:33.908683 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.033404 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.043438 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-content/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.049569 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-content/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.212893 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-content/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.233897 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.276771 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-hngwn_70393761-4fbb-4ab6-81c2-f0542f67f775/marketplace-operator/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.457297 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtklb_a99f89f1-7b96-4a0d-9de6-5930f350e330/registry-server/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.482211 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.614215 5107 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.642006 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-content/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.643710 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-content/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.755511 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-utilities/0.log" Feb 20 00:38:34 crc kubenswrapper[5107]: I0220 00:38:34.783448 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/extract-content/0.log" Feb 20 00:38:35 crc kubenswrapper[5107]: I0220 00:38:35.067554 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbhv6_79e18e26-a4b7-4c39-a891-132a1e36d2d2/registry-server/0.log" Feb 20 00:38:35 crc kubenswrapper[5107]: I0220 00:38:35.358859 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:38:35 crc kubenswrapper[5107]: I0220 00:38:35.365076 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fnskd_c9d08e95-6328-4e97-aab4-4dd9913914cc/kube-multus/0.log" Feb 20 00:38:35 crc kubenswrapper[5107]: I0220 00:38:35.379945 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:38:35 crc 
kubenswrapper[5107]: I0220 00:38:35.382286 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 20 00:38:40 crc kubenswrapper[5107]: I0220 00:38:40.684906 5107 scope.go:117] "RemoveContainer" containerID="b66309aef4fa4bfc55b301e0d9ba510252b0a5386c3917d41bf5036502c35f10" Feb 20 00:38:47 crc kubenswrapper[5107]: I0220 00:38:47.487137 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:38:47 crc kubenswrapper[5107]: E0220 00:38:47.488236 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:38:48 crc kubenswrapper[5107]: I0220 00:38:48.653343 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c6c7796b5-fjpqg_0a556d1d-e76d-4c41-aa39-cd3da09d0fc4/prometheus-operator-admission-webhook/0.log" Feb 20 00:38:48 crc kubenswrapper[5107]: I0220 00:38:48.668253 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-dqzb2_d8430d39-cf33-43a4-922b-46a1456aecfc/prometheus-operator/0.log" Feb 20 00:38:48 crc kubenswrapper[5107]: I0220 00:38:48.715355 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c6c7796b5-r5vcm_868305ca-ba59-4b0f-9887-cc5967dd4e1e/prometheus-operator-admission-webhook/0.log" Feb 20 00:38:48 crc kubenswrapper[5107]: I0220 00:38:48.827296 5107 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-22brv_aedefad2-67f0-4b49-bb87-80b19cf0faf5/operator/0.log" Feb 20 00:38:48 crc kubenswrapper[5107]: I0220 00:38:48.834183 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-2dsh6_5679dc1f-7e3e-4370-998c-1cbb4b0fad69/perses-operator/0.log" Feb 20 00:38:59 crc kubenswrapper[5107]: I0220 00:38:59.486463 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:38:59 crc kubenswrapper[5107]: E0220 00:38:59.489055 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:39:10 crc kubenswrapper[5107]: I0220 00:39:10.486736 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:39:10 crc kubenswrapper[5107]: E0220 00:39:10.487541 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:39:22 crc kubenswrapper[5107]: I0220 00:39:22.486778 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:39:22 crc kubenswrapper[5107]: E0220 00:39:22.487817 5107 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:39:27 crc kubenswrapper[5107]: I0220 00:39:27.431299 5107 generic.go:358] "Generic (PLEG): container finished" podID="3860e011-8a97-4178-a087-90bd63b6694e" containerID="569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed" exitCode=0 Feb 20 00:39:27 crc kubenswrapper[5107]: I0220 00:39:27.431425 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" event={"ID":"3860e011-8a97-4178-a087-90bd63b6694e","Type":"ContainerDied","Data":"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed"} Feb 20 00:39:27 crc kubenswrapper[5107]: I0220 00:39:27.432831 5107 scope.go:117] "RemoveContainer" containerID="569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed" Feb 20 00:39:28 crc kubenswrapper[5107]: I0220 00:39:28.280438 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b5g2w_must-gather-vf5wn_3860e011-8a97-4178-a087-90bd63b6694e/gather/0.log" Feb 20 00:39:33 crc kubenswrapper[5107]: I0220 00:39:33.486006 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777" Feb 20 00:39:33 crc kubenswrapper[5107]: E0220 00:39:33.486765 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78" Feb 20 00:39:34 crc kubenswrapper[5107]: I0220 00:39:34.600326 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b5g2w/must-gather-vf5wn"] Feb 20 00:39:34 crc kubenswrapper[5107]: I0220 00:39:34.601662 5107 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="copy" containerID="cri-o://1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72" gracePeriod=2 Feb 20 00:39:34 crc kubenswrapper[5107]: I0220 00:39:34.603932 5107 status_manager.go:895] "Failed to get status for pod" podUID="3860e011-8a97-4178-a087-90bd63b6694e" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" err="pods \"must-gather-vf5wn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-b5g2w\": no relationship found between node 'crc' and this object" Feb 20 00:39:34 crc kubenswrapper[5107]: I0220 00:39:34.624973 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b5g2w/must-gather-vf5wn"] Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.081442 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b5g2w_must-gather-vf5wn_3860e011-8a97-4178-a087-90bd63b6694e/copy/0.log" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.082269 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.084001 5107 status_manager.go:895] "Failed to get status for pod" podUID="3860e011-8a97-4178-a087-90bd63b6694e" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" err="pods \"must-gather-vf5wn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-b5g2w\": no relationship found between node 'crc' and this object" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.159174 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output\") pod \"3860e011-8a97-4178-a087-90bd63b6694e\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.159238 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q67xf\" (UniqueName: \"kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf\") pod \"3860e011-8a97-4178-a087-90bd63b6694e\" (UID: \"3860e011-8a97-4178-a087-90bd63b6694e\") " Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.167268 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf" (OuterVolumeSpecName: "kube-api-access-q67xf") pod "3860e011-8a97-4178-a087-90bd63b6694e" (UID: "3860e011-8a97-4178-a087-90bd63b6694e"). InnerVolumeSpecName "kube-api-access-q67xf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.222347 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3860e011-8a97-4178-a087-90bd63b6694e" (UID: "3860e011-8a97-4178-a087-90bd63b6694e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.261258 5107 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3860e011-8a97-4178-a087-90bd63b6694e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.261298 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q67xf\" (UniqueName: \"kubernetes.io/projected/3860e011-8a97-4178-a087-90bd63b6694e-kube-api-access-q67xf\") on node \"crc\" DevicePath \"\"" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.515898 5107 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b5g2w_must-gather-vf5wn_3860e011-8a97-4178-a087-90bd63b6694e/copy/0.log" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.516763 5107 generic.go:358] "Generic (PLEG): container finished" podID="3860e011-8a97-4178-a087-90bd63b6694e" containerID="1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72" exitCode=143 Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.516883 5107 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.516895 5107 scope.go:117] "RemoveContainer" containerID="1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.518659 5107 status_manager.go:895] "Failed to get status for pod" podUID="3860e011-8a97-4178-a087-90bd63b6694e" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" err="pods \"must-gather-vf5wn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-b5g2w\": no relationship found between node 'crc' and this object" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.539771 5107 scope.go:117] "RemoveContainer" containerID="569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.544292 5107 status_manager.go:895] "Failed to get status for pod" podUID="3860e011-8a97-4178-a087-90bd63b6694e" pod="openshift-must-gather-b5g2w/must-gather-vf5wn" err="pods \"must-gather-vf5wn\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-b5g2w\": no relationship found between node 'crc' and this object" Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.599189 5107 scope.go:117] "RemoveContainer" containerID="1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72" Feb 20 00:39:35 crc kubenswrapper[5107]: E0220 00:39:35.599654 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72\": container with ID starting with 1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72 not found: ID does not exist" containerID="1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72" Feb 20 00:39:35 crc 
kubenswrapper[5107]: I0220 00:39:35.599699 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72"} err="failed to get container status \"1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72\": rpc error: code = NotFound desc = could not find container \"1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72\": container with ID starting with 1d4d3eb620ccd64992d5a660320582ac34188dbd924a917fd74376aaf38e4f72 not found: ID does not exist"
Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.599726 5107 scope.go:117] "RemoveContainer" containerID="569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed"
Feb 20 00:39:35 crc kubenswrapper[5107]: E0220 00:39:35.600281 5107 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed\": container with ID starting with 569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed not found: ID does not exist" containerID="569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed"
Feb 20 00:39:35 crc kubenswrapper[5107]: I0220 00:39:35.600301 5107 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed"} err="failed to get container status \"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed\": rpc error: code = NotFound desc = could not find container \"569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed\": container with ID starting with 569a311cbb0d275088fb08b0ad5a81e209213f732efd968c2e05ad8d450d9eed not found: ID does not exist"
Feb 20 00:39:36 crc kubenswrapper[5107]: I0220 00:39:36.502693 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3860e011-8a97-4178-a087-90bd63b6694e" path="/var/lib/kubelet/pods/3860e011-8a97-4178-a087-90bd63b6694e/volumes"
Feb 20 00:39:47 crc kubenswrapper[5107]: I0220 00:39:47.485546 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:39:47 crc kubenswrapper[5107]: E0220 00:39:47.486163 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"
Feb 20 00:39:59 crc kubenswrapper[5107]: I0220 00:39:59.488727 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:39:59 crc kubenswrapper[5107]: E0220 00:39:59.489735 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.149105 5107 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29525800-m7nhf"]
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150397 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9fa2bba-cc92-43d3-a4c1-5691a1927e91" containerName="oc"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150430 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fa2bba-cc92-43d3-a4c1-5691a1927e91" containerName="oc"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150508 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="gather"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150523 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="gather"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150541 5107 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="copy"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150552 5107 state_mem.go:107] "Deleted CPUSet assignment" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="copy"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150761 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="copy"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150818 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="3860e011-8a97-4178-a087-90bd63b6694e" containerName="gather"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.150838 5107 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9fa2bba-cc92-43d3-a4c1-5691a1927e91" containerName="oc"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.158667 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.161943 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525800-m7nhf"]
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.162728 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.162924 5107 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-km7dp\""
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.163800 5107 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.319513 5107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbf5\" (UniqueName: \"kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5\") pod \"auto-csr-approver-29525800-m7nhf\" (UID: \"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5\") " pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.421127 5107 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbf5\" (UniqueName: \"kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5\") pod \"auto-csr-approver-29525800-m7nhf\" (UID: \"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5\") " pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.456711 5107 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbf5\" (UniqueName: \"kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5\") pod \"auto-csr-approver-29525800-m7nhf\" (UID: \"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5\") " pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.480560 5107 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:00 crc kubenswrapper[5107]: I0220 00:40:00.769980 5107 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29525800-m7nhf"]
Feb 20 00:40:01 crc kubenswrapper[5107]: I0220 00:40:01.797979 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525800-m7nhf" event={"ID":"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5","Type":"ContainerStarted","Data":"dcd001ee1c45b30866d5becf6b647b5af030cd19e70662a422f7ba4f75b9958a"}
Feb 20 00:40:02 crc kubenswrapper[5107]: I0220 00:40:02.811959 5107 generic.go:358] "Generic (PLEG): container finished" podID="d9e8f19d-d622-43e1-9e63-7fbdbd548ba5" containerID="18d66d519d7ef3465c3a49bb73b344fa3fb98c4c9d99fed5f25442007c9c402d" exitCode=0
Feb 20 00:40:02 crc kubenswrapper[5107]: I0220 00:40:02.812180 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525800-m7nhf" event={"ID":"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5","Type":"ContainerDied","Data":"18d66d519d7ef3465c3a49bb73b344fa3fb98c4c9d99fed5f25442007c9c402d"}
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.149698 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.229000 5107 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bbf5\" (UniqueName: \"kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5\") pod \"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5\" (UID: \"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5\") "
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.238840 5107 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5" (OuterVolumeSpecName: "kube-api-access-5bbf5") pod "d9e8f19d-d622-43e1-9e63-7fbdbd548ba5" (UID: "d9e8f19d-d622-43e1-9e63-7fbdbd548ba5"). InnerVolumeSpecName "kube-api-access-5bbf5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.332801 5107 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bbf5\" (UniqueName: \"kubernetes.io/projected/d9e8f19d-d622-43e1-9e63-7fbdbd548ba5-kube-api-access-5bbf5\") on node \"crc\" DevicePath \"\""
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.835490 5107 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29525800-m7nhf"
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.835509 5107 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29525800-m7nhf" event={"ID":"d9e8f19d-d622-43e1-9e63-7fbdbd548ba5","Type":"ContainerDied","Data":"dcd001ee1c45b30866d5becf6b647b5af030cd19e70662a422f7ba4f75b9958a"}
Feb 20 00:40:04 crc kubenswrapper[5107]: I0220 00:40:04.835670 5107 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd001ee1c45b30866d5becf6b647b5af030cd19e70662a422f7ba4f75b9958a"
Feb 20 00:40:05 crc kubenswrapper[5107]: I0220 00:40:05.251614 5107 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-gf9xm"]
Feb 20 00:40:05 crc kubenswrapper[5107]: I0220 00:40:05.263770 5107 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29525794-gf9xm"]
Feb 20 00:40:06 crc kubenswrapper[5107]: I0220 00:40:06.500848 5107 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54deb717-c484-4ab7-89df-46458a4846ec" path="/var/lib/kubelet/pods/54deb717-c484-4ab7-89df-46458a4846ec/volumes"
Feb 20 00:40:12 crc kubenswrapper[5107]: I0220 00:40:12.497699 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:40:12 crc kubenswrapper[5107]: E0220 00:40:12.498367 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"
Feb 20 00:40:24 crc kubenswrapper[5107]: I0220 00:40:24.490700 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:40:24 crc kubenswrapper[5107]: E0220 00:40:24.492010 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"
Feb 20 00:40:38 crc kubenswrapper[5107]: I0220 00:40:38.486486 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:40:38 crc kubenswrapper[5107]: E0220 00:40:38.487335 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"
Feb 20 00:40:40 crc kubenswrapper[5107]: I0220 00:40:40.862535 5107 scope.go:117] "RemoveContainer" containerID="6fb328d4f4d1ca4634fd556c1eaefaed1bd8da9521f8a3641fa8eba8fcd7e07b"
Feb 20 00:40:53 crc kubenswrapper[5107]: I0220 00:40:53.486381 5107 scope.go:117] "RemoveContainer" containerID="1412f0b94e946f14beed198d864b314c0357f03a31e892ba17011c0b23e31777"
Feb 20 00:40:53 crc kubenswrapper[5107]: E0220 00:40:53.487605 5107 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5bqkx_openshift-machine-config-operator(2a8cc693-438e-4d3b-8865-7d3907f9dc78)\"" pod="openshift-machine-config-operator/machine-config-daemon-5bqkx" podUID="2a8cc693-438e-4d3b-8865-7d3907f9dc78"